00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2456 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3721 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.282 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.282 The recommended git tool is: git 00:00:00.282 using credential 00000000-0000-0000-0000-000000000002 00:00:00.284 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.332 Fetching changes from the remote Git repository 00:00:00.333 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.380 Using shallow fetch with depth 1 00:00:00.380 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.380 > git --version # timeout=10 00:00:00.414 > git --version # 'git version 2.39.2' 00:00:00.414 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.441 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.441 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.468 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.478 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.489 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.489 > git config core.sparsecheckout # timeout=10 00:00:06.500 > git read-tree -mu HEAD # timeout=10 00:00:06.514 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.532 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.532 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.602 [Pipeline] Start of Pipeline 00:00:06.616 [Pipeline] library 00:00:06.618 Loading library shm_lib@master 00:00:06.618 Library shm_lib@master is cached. Copying from home. 00:00:06.635 [Pipeline] node 00:00:06.663 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.664 [Pipeline] { 00:00:06.671 [Pipeline] catchError 00:00:06.672 [Pipeline] { 00:00:06.683 [Pipeline] wrap 00:00:06.693 [Pipeline] { 00:00:06.698 [Pipeline] stage 00:00:06.699 [Pipeline] { (Prologue) 00:00:06.947 [Pipeline] sh 00:00:07.848 + logger -p user.info -t JENKINS-CI 00:00:07.881 [Pipeline] echo 00:00:07.883 Node: WFP21 00:00:07.890 [Pipeline] sh 00:00:08.231 [Pipeline] setCustomBuildProperty 00:00:08.241 [Pipeline] echo 00:00:08.243 Cleanup processes 00:00:08.247 [Pipeline] sh 00:00:08.541 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.541 6358 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.554 [Pipeline] sh 00:00:08.847 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.847 ++ grep -v 'sudo pgrep' 00:00:08.847 ++ awk '{print $1}' 00:00:08.847 + sudo kill -9 00:00:08.847 + true 00:00:08.863 [Pipeline] cleanWs 00:00:08.873 [WS-CLEANUP] Deleting project workspace... 00:00:08.873 [WS-CLEANUP] Deferred wipeout is used... 
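The cleanup step above follows a common shell pattern: list matching processes with pgrep -af, filter out the pgrep invocation itself, extract the PIDs with awk, and kill them while tolerating an empty match (hence the trailing "+ true"). A minimal sketch of that pattern in bash; the function name and the hard-coded pattern are illustrative, not taken from the job scripts:

#!/usr/bin/env bash
# Kill leftover processes whose command line matches a workspace path.
# An empty PID list is expected on a clean node, hence the "|| true".
kill_stale() {
    local pattern=$1 pids
    pids=$(sudo pgrep -af "$pattern" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids || true
}

kill_stale /var/jenkins/workspace/nvmf-phy-autotest/spdk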
00:00:08.886 [WS-CLEANUP] done 00:00:08.890 [Pipeline] setCustomBuildProperty 00:00:08.904 [Pipeline] sh 00:00:09.198 + sudo git config --global --replace-all safe.directory '*' 00:00:09.289 [Pipeline] httpRequest 00:00:11.259 [Pipeline] echo 00:00:11.262 Sorcerer 10.211.164.20 is alive 00:00:11.272 [Pipeline] retry 00:00:11.273 [Pipeline] { 00:00:11.287 [Pipeline] httpRequest 00:00:11.293 HttpMethod: GET 00:00:11.294 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.294 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.307 Response Code: HTTP/1.1 200 OK 00:00:11.307 Success: Status code 200 is in the accepted range: 200,404 00:00:11.307 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:36.574 [Pipeline] } 00:00:36.590 [Pipeline] // retry 00:00:36.598 [Pipeline] sh 00:00:36.892 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:36.909 [Pipeline] httpRequest 00:00:37.627 [Pipeline] echo 00:00:37.629 Sorcerer 10.211.164.20 is alive 00:00:37.638 [Pipeline] retry 00:00:37.639 [Pipeline] { 00:00:37.653 [Pipeline] httpRequest 00:00:37.658 HttpMethod: GET 00:00:37.659 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:37.660 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:37.669 Response Code: HTTP/1.1 200 OK 00:00:37.669 Success: Status code 200 is in the accepted range: 200,404 00:00:37.669 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:02:05.685 [Pipeline] } 00:02:05.702 [Pipeline] // retry 00:02:05.709 [Pipeline] sh 00:02:06.008 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:02:08.570 [Pipeline] sh 00:02:08.862 + git -C spdk log --oneline -n5 00:02:08.862 e01cb43b8 mk/spdk.common.mk sed the minor version 00:02:08.862 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:02:08.862 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:02:08.862 66289a6db build: use VERSION file for storing version 00:02:08.862 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:02:08.880 [Pipeline] withCredentials 00:02:08.891 > git --version # timeout=10 00:02:08.902 > git --version # 'git version 2.39.2' 00:02:08.929 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:08.931 [Pipeline] { 00:02:08.939 [Pipeline] retry 00:02:08.940 [Pipeline] { 00:02:08.953 [Pipeline] sh 00:02:09.488 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:02:09.762 [Pipeline] } 00:02:09.781 [Pipeline] // retry 00:02:09.786 [Pipeline] } 00:02:09.803 [Pipeline] // withCredentials 00:02:09.813 [Pipeline] httpRequest 00:02:10.547 [Pipeline] echo 00:02:10.549 Sorcerer 10.211.164.20 is alive 00:02:10.558 [Pipeline] retry 00:02:10.560 [Pipeline] { 00:02:10.574 [Pipeline] httpRequest 00:02:10.579 HttpMethod: GET 00:02:10.579 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:10.580 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:10.590 Response Code: HTTP/1.1 200 OK 00:02:10.590 Success: Status code 200 is in the accepted range: 200,404 00:02:10.591 Saving response body to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:15.573 [Pipeline] } 00:02:15.589 [Pipeline] // retry 00:02:15.596 [Pipeline] sh 00:02:15.885 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:17.281 [Pipeline] sh 00:02:17.573 + git -C dpdk log --oneline -n5 00:02:17.573 caf0f5d395 version: 22.11.4 00:02:17.573 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:17.573 dc9c799c7d vhost: fix missing spinlock unlock 00:02:17.573 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:17.573 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:17.584 [Pipeline] } 00:02:17.597 [Pipeline] // stage 00:02:17.604 [Pipeline] stage 00:02:17.606 [Pipeline] { (Prepare) 00:02:17.621 [Pipeline] writeFile 00:02:17.636 [Pipeline] sh 00:02:17.922 + logger -p user.info -t JENKINS-CI 00:02:17.936 [Pipeline] sh 00:02:18.224 + logger -p user.info -t JENKINS-CI 00:02:18.237 [Pipeline] sh 00:02:18.526 + cat autorun-spdk.conf 00:02:18.526 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.526 SPDK_TEST_NVMF=1 00:02:18.526 SPDK_TEST_NVME_CLI=1 00:02:18.526 SPDK_TEST_NVMF_NICS=mlx5 00:02:18.526 SPDK_RUN_UBSAN=1 00:02:18.526 NET_TYPE=phy 00:02:18.526 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:18.526 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:18.535 RUN_NIGHTLY=1 00:02:18.539 [Pipeline] readFile 00:02:18.571 [Pipeline] withEnv 00:02:18.572 [Pipeline] { 00:02:18.582 [Pipeline] sh 00:02:18.873 + set -ex 00:02:18.873 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:02:18.873 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:18.873 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.873 ++ SPDK_TEST_NVMF=1 00:02:18.873 ++ SPDK_TEST_NVME_CLI=1 00:02:18.873 ++ SPDK_TEST_NVMF_NICS=mlx5 00:02:18.873 ++ SPDK_RUN_UBSAN=1 00:02:18.873 ++ NET_TYPE=phy 00:02:18.873 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:18.873 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:18.873 ++ RUN_NIGHTLY=1 00:02:18.873 + case $SPDK_TEST_NVMF_NICS in 00:02:18.873 + DRIVERS=mlx5_ib 00:02:18.873 + [[ -n mlx5_ib ]] 00:02:18.873 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:18.873 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:25.461 rmmod: ERROR: Module irdma is not currently loaded 00:02:25.461 rmmod: ERROR: Module i40iw is not currently loaded 00:02:25.461 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:25.461 + true 00:02:25.461 + for D in $DRIVERS 00:02:25.461 + sudo modprobe mlx5_ib 00:02:25.461 + exit 0 00:02:25.471 [Pipeline] } 00:02:25.485 [Pipeline] // withEnv 00:02:25.491 [Pipeline] } 00:02:25.503 [Pipeline] // stage 00:02:25.512 [Pipeline] catchError 00:02:25.514 [Pipeline] { 00:02:25.528 [Pipeline] timeout 00:02:25.528 Timeout set to expire in 1 hr 0 min 00:02:25.530 [Pipeline] { 00:02:25.543 [Pipeline] stage 00:02:25.544 [Pipeline] { (Tests) 00:02:25.558 [Pipeline] sh 00:02:25.851 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:02:25.851 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:02:25.851 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:02:25.851 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:02:25.851 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:25.851 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:02:25.851 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:02:25.851 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:25.851 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:02:25.851 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:25.851 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:02:25.851 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:02:25.851 + source /etc/os-release 00:02:25.851 ++ NAME='Fedora Linux' 00:02:25.851 ++ VERSION='39 (Cloud Edition)' 00:02:25.851 ++ ID=fedora 00:02:25.851 ++ VERSION_ID=39 00:02:25.851 ++ VERSION_CODENAME= 00:02:25.851 ++ PLATFORM_ID=platform:f39 00:02:25.851 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:25.851 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.851 ++ LOGO=fedora-logo-icon 00:02:25.851 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:25.851 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.851 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:25.851 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.851 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.851 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.851 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:25.851 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.851 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:25.851 ++ SUPPORT_END=2024-11-12 00:02:25.851 ++ VARIANT='Cloud Edition' 00:02:25.851 ++ VARIANT_ID=cloud 00:02:25.852 + uname -a 00:02:25.852 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:25.852 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:29.161 Hugepages 00:02:29.161 node hugesize free / total 00:02:29.161 node0 1048576kB 0 / 0 00:02:29.161 node0 2048kB 0 / 0 00:02:29.161 node1 1048576kB 0 / 0 00:02:29.161 node1 2048kB 0 / 0 00:02:29.161 00:02:29.161 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:29.161 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:29.161 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:29.161 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:29.161 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:29.161 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:29.161 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:29.161 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:29.161 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:29.161 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:29.161 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:29.161 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:29.161 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:29.161 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:29.161 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:29.161 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:29.161 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:29.161 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:29.161 + rm -f /tmp/spdk-ld-path 00:02:29.161 + source autorun-spdk.conf 00:02:29.161 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.161 ++ SPDK_TEST_NVMF=1 00:02:29.161 ++ SPDK_TEST_NVME_CLI=1 00:02:29.161 ++ SPDK_TEST_NVMF_NICS=mlx5 00:02:29.161 ++ SPDK_RUN_UBSAN=1 00:02:29.161 ++ NET_TYPE=phy 00:02:29.161 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:29.161 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:29.161 ++ RUN_NIGHTLY=1 00:02:29.161 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:29.161 + [[ -n '' ]] 00:02:29.161 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:29.161 + for M in /var/spdk/build-*-manifest.txt 
00:02:29.161 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:29.161 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:29.161 + for M in /var/spdk/build-*-manifest.txt 00:02:29.161 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:29.161 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:29.161 + for M in /var/spdk/build-*-manifest.txt 00:02:29.161 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:29.161 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:29.161 ++ uname 00:02:29.161 + [[ Linux == \L\i\n\u\x ]] 00:02:29.161 + sudo dmesg -T 00:02:29.161 + sudo dmesg --clear 00:02:29.161 + dmesg_pid=7453 00:02:29.161 + [[ Fedora Linux == FreeBSD ]] 00:02:29.161 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.161 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.161 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:29.161 + sudo dmesg -Tw 00:02:29.161 + [[ -x /usr/src/fio-static/fio ]] 00:02:29.161 + export FIO_BIN=/usr/src/fio-static/fio 00:02:29.161 + FIO_BIN=/usr/src/fio-static/fio 00:02:29.161 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:29.161 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:29.161 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:29.161 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.161 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.161 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:29.161 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.161 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.161 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:29.161 18:57:03 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:29.161 18:57:03 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:29.161 18:57:03 -- nvmf-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:02:29.161 18:57:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:29.161 18:57:03 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:29.161 18:57:03 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:29.161 18:57:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:29.161 18:57:03 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:29.161 18:57:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:29.161 18:57:03 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.161 18:57:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.161 18:57:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.161 18:57:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.162 18:57:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.162 18:57:03 -- paths/export.sh@5 -- $ export PATH 00:02:29.162 18:57:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.162 18:57:03 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:29.162 18:57:03 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:29.162 18:57:03 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734112623.XXXXXX 00:02:29.162 18:57:03 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734112623.BIJcil 00:02:29.162 18:57:03 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:29.162 18:57:03 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:29.162 18:57:03 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:29.162 18:57:03 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:02:29.162 18:57:03 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:29.162 18:57:03 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:29.162 18:57:03 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:29.162 18:57:03 -- common/autotest_common.sh@409 -- 
$ xtrace_disable 00:02:29.162 18:57:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.162 18:57:03 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:02:29.162 18:57:03 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:29.162 18:57:03 -- pm/common@17 -- $ local monitor 00:02:29.162 18:57:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.162 18:57:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.162 18:57:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.162 18:57:03 -- pm/common@21 -- $ date +%s 00:02:29.162 18:57:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.162 18:57:03 -- pm/common@21 -- $ date +%s 00:02:29.162 18:57:03 -- pm/common@25 -- $ sleep 1 00:02:29.162 18:57:03 -- pm/common@21 -- $ date +%s 00:02:29.162 18:57:03 -- pm/common@21 -- $ date +%s 00:02:29.162 18:57:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734112623 00:02:29.162 18:57:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734112623 00:02:29.162 18:57:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734112623 00:02:29.162 18:57:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734112623 00:02:29.422 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734112623_collect-cpu-load.pm.log 00:02:29.422 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734112623_collect-vmstat.pm.log 00:02:29.422 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734112623_collect-cpu-temp.pm.log 00:02:29.422 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734112623_collect-bmc-pm.bmc.pm.log 00:02:30.365 18:57:04 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:30.365 18:57:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:30.365 18:57:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:30.365 18:57:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:30.365 18:57:04 -- spdk/autobuild.sh@16 -- $ date -u 00:02:30.365 Fri Dec 13 05:57:04 PM UTC 2024 00:02:30.365 18:57:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:30.365 v25.01-rc1-2-ge01cb43b8 00:02:30.365 18:57:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:30.365 18:57:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:30.365 18:57:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:30.365 18:57:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:30.365 18:57:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:30.365 18:57:04 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:30.365 ************************************ 00:02:30.365 START TEST ubsan 00:02:30.365 ************************************ 00:02:30.365 18:57:04 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:30.365 using ubsan 00:02:30.365 00:02:30.365 real 0m0.001s 00:02:30.365 user 0m0.000s 00:02:30.365 sys 0m0.000s 00:02:30.365 18:57:04 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:30.365 18:57:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:30.365 ************************************ 00:02:30.365 END TEST ubsan 00:02:30.365 ************************************ 00:02:30.365 18:57:04 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:30.365 18:57:04 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:30.365 18:57:04 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:30.365 18:57:04 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:30.365 18:57:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:30.365 18:57:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.365 ************************************ 00:02:30.365 START TEST build_native_dpdk 00:02:30.365 ************************************ 00:02:30.365 18:57:04 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:02:30.365 caf0f5d395 version: 22.11.4 00:02:30.365 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:30.365 dc9c799c7d vhost: fix missing spinlock unlock 00:02:30.365 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:30.365 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:30.365 
18:57:04 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:30.365 18:57:04 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:30.365 18:57:04 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:30.625 patching file config/rte_config.h 00:02:30.625 Hunk #1 succeeded at 60 (offset 1 line). 00:02:30.625 18:57:04 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:30.625 18:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:30.626 18:57:04 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:30.626 patching file lib/pcapng/rte_pcapng.c 00:02:30.626 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:30.626 18:57:04 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:30.626 18:57:04 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:30.626 18:57:04 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:30.626 18:57:04 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:30.626 18:57:04 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:30.626 18:57:04 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:30.626 18:57:04 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:37.212 The Meson build system 00:02:37.212 Version: 1.5.0 00:02:37.212 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:02:37.212 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:02:37.212 Build type: native build 00:02:37.212 Program cat found: YES (/usr/bin/cat) 00:02:37.212 Project name: DPDK 00:02:37.212 Project version: 22.11.4 00:02:37.212 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:37.212 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:37.212 Host machine cpu family: x86_64 00:02:37.212 Host machine cpu: x86_64 00:02:37.212 Message: ## Building in Developer Mode ## 00:02:37.212 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.212 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:37.212 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.212 Program objdump found: YES (/usr/bin/objdump) 00:02:37.212 Program python3 found: YES (/usr/bin/python3) 00:02:37.212 Program cat found: YES (/usr/bin/cat) 00:02:37.212 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
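The lt/ge checks traced above (cmp_versions) split dotted versions on ".", "-" and ":" and compare them component by component, which is why 22.11.4 sorts after 21.11.0 but before 24.07.0. A standalone sketch of that idea in bash; this is a simplified illustration for purely numeric components, not the scripts/common.sh implementation itself:

#!/usr/bin/env bash
# Succeed (return 0) if dotted version $1 is strictly older than $2.
# Components are compared numerically; missing components count as 0.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 22.11.4 24.07.0 && echo "22.11.4 is older than 24.07.0"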
00:02:37.212 Checking for size of "void *" : 8 00:02:37.212 Checking for size of "void *" : 8 (cached) 00:02:37.212 Library m found: YES 00:02:37.212 Library numa found: YES 00:02:37.212 Has header "numaif.h" : YES 00:02:37.212 Library fdt found: NO 00:02:37.212 Library execinfo found: NO 00:02:37.212 Has header "execinfo.h" : YES 00:02:37.212 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:37.212 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.212 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.212 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.212 Run-time dependency openssl found: YES 3.1.1 00:02:37.212 Run-time dependency libpcap found: YES 1.10.4 00:02:37.212 Has header "pcap.h" with dependency libpcap: YES 00:02:37.212 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.212 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.212 Compiler for C supports arguments -Wformat: YES 00:02:37.212 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.212 Compiler for C supports arguments -Wformat-security: NO 00:02:37.212 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.212 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.212 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.212 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.212 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.212 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.212 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.212 Compiler for C supports arguments -Wundef: YES 00:02:37.212 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.212 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.212 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.212 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.212 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.212 Compiler for C supports arguments -mavx512f: YES 00:02:37.212 Checking if "AVX512 checking" compiles: YES 00:02:37.212 Fetching value of define "__SSE4_2__" : 1 00:02:37.212 Fetching value of define "__AES__" : 1 00:02:37.212 Fetching value of define "__AVX__" : 1 00:02:37.212 Fetching value of define "__AVX2__" : 1 00:02:37.212 Fetching value of define "__AVX512BW__" : 1 00:02:37.212 Fetching value of define "__AVX512CD__" : 1 00:02:37.212 Fetching value of define "__AVX512DQ__" : 1 00:02:37.212 Fetching value of define "__AVX512F__" : 1 00:02:37.212 Fetching value of define "__AVX512VL__" : 1 00:02:37.212 Fetching value of define "__PCLMUL__" : 1 00:02:37.212 Fetching value of define "__RDRND__" : 1 00:02:37.212 Fetching value of define "__RDSEED__" : 1 00:02:37.212 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.212 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.212 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.212 Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.212 Checking for function "getentropy" : YES 00:02:37.212 Message: lib/eal: Defining dependency "eal" 00:02:37.212 Message: lib/ring: Defining dependency "ring" 00:02:37.212 Message: lib/rcu: Defining dependency "rcu" 00:02:37.212 Message: lib/mempool: Defining dependency "mempool" 00:02:37.212 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.212 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.212 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.212 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:37.212 Compiler for C supports arguments -mpclmul: YES 00:02:37.212 Compiler for C supports arguments -maes: YES 00:02:37.212 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.212 Compiler for C supports arguments -mavx512bw: YES 00:02:37.212 Compiler for C supports arguments -mavx512dq: YES 00:02:37.212 Compiler for C supports arguments -mavx512vl: YES 00:02:37.212 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.212 Compiler for C supports arguments -mavx2: YES 00:02:37.212 Compiler for C supports arguments -mavx: YES 00:02:37.212 Message: lib/net: Defining dependency "net" 00:02:37.212 Message: lib/meter: Defining dependency "meter" 00:02:37.212 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.212 Message: lib/pci: Defining dependency "pci" 00:02:37.212 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.212 Message: lib/metrics: Defining dependency "metrics" 00:02:37.212 Message: lib/hash: Defining dependency "hash" 00:02:37.212 Message: lib/timer: Defining dependency "timer" 00:02:37.212 Fetching value of define "__AVX2__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.212 Message: lib/acl: Defining dependency "acl" 00:02:37.212 Message: lib/bbdev: Defining dependency "bbdev" 00:02:37.212 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:37.212 Run-time dependency libelf found: YES 0.191 00:02:37.212 Message: lib/bpf: Defining dependency "bpf" 00:02:37.212 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:37.212 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.212 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.212 Message: lib/distributor: Defining dependency "distributor" 00:02:37.212 Message: lib/efd: Defining dependency "efd" 00:02:37.212 Message: lib/eventdev: Defining dependency "eventdev" 00:02:37.212 Message: lib/gpudev: Defining dependency "gpudev" 00:02:37.212 Message: lib/gro: Defining dependency "gro" 00:02:37.212 Message: lib/gso: Defining dependency "gso" 00:02:37.212 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:37.212 Message: lib/jobstats: Defining dependency "jobstats" 00:02:37.212 Message: lib/latencystats: Defining dependency "latencystats" 00:02:37.212 Message: lib/lpm: Defining dependency "lpm" 00:02:37.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.212 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:37.212 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:37.212 Message: lib/member: Defining dependency "member" 00:02:37.212 Message: lib/pcapng: Defining dependency "pcapng" 00:02:37.212 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.212 Message: lib/power: Defining dependency "power" 00:02:37.212 Message: lib/rawdev: Defining dependency "rawdev" 00:02:37.212 Message: lib/regexdev: Defining dependency "regexdev" 00:02:37.212 Message: lib/dmadev: 
Defining dependency "dmadev" 00:02:37.212 Message: lib/rib: Defining dependency "rib" 00:02:37.212 Message: lib/reorder: Defining dependency "reorder" 00:02:37.212 Message: lib/sched: Defining dependency "sched" 00:02:37.212 Message: lib/security: Defining dependency "security" 00:02:37.213 Message: lib/stack: Defining dependency "stack" 00:02:37.213 Has header "linux/userfaultfd.h" : YES 00:02:37.213 Message: lib/vhost: Defining dependency "vhost" 00:02:37.213 Message: lib/ipsec: Defining dependency "ipsec" 00:02:37.213 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.213 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.213 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.213 Message: lib/fib: Defining dependency "fib" 00:02:37.213 Message: lib/port: Defining dependency "port" 00:02:37.213 Message: lib/pdump: Defining dependency "pdump" 00:02:37.213 Message: lib/table: Defining dependency "table" 00:02:37.213 Message: lib/pipeline: Defining dependency "pipeline" 00:02:37.213 Message: lib/graph: Defining dependency "graph" 00:02:37.213 Message: lib/node: Defining dependency "node" 00:02:37.213 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:37.213 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.213 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.213 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:37.213 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:37.213 Compiler for C supports arguments -Wno-unused-value: YES 00:02:37.213 Compiler for C supports arguments -Wno-format: YES 00:02:37.213 Compiler for C supports arguments -Wno-format-security: YES 00:02:37.213 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:37.783 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:37.783 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:37.783 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:37.783 Fetching value of define "__AVX2__" : 1 (cached) 00:02:37.783 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.783 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.783 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.783 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:37.783 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:37.783 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:37.783 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:37.783 Configuring doxy-api.conf using configuration 00:02:37.783 Program sphinx-build found: NO 00:02:37.783 Configuring rte_build_config.h using configuration 00:02:37.783 Message: 00:02:37.783 ================= 00:02:37.783 Applications Enabled 00:02:37.783 ================= 00:02:37.783 00:02:37.783 apps: 00:02:37.783 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:37.783 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:37.783 test-security-perf, 00:02:37.783 00:02:37.783 Message: 00:02:37.783 ================= 00:02:37.783 Libraries Enabled 00:02:37.783 ================= 00:02:37.783 00:02:37.783 libs: 00:02:37.783 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:37.783 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:37.783 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:37.783 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:37.783 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:37.783 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:37.783 table, pipeline, graph, node, 00:02:37.783 00:02:37.783 Message: 00:02:37.783 =============== 00:02:37.783 Drivers Enabled 00:02:37.783 =============== 00:02:37.783 00:02:37.783 common: 00:02:37.783 00:02:37.783 bus: 00:02:37.783 pci, vdev, 00:02:37.783 mempool: 00:02:37.783 ring, 00:02:37.783 dma: 00:02:37.783 00:02:37.783 net: 00:02:37.783 i40e, 00:02:37.784 raw: 00:02:37.784 00:02:37.784 crypto: 00:02:37.784 00:02:37.784 compress: 00:02:37.784 00:02:37.784 regex: 00:02:37.784 00:02:37.784 vdpa: 00:02:37.784 00:02:37.784 event: 00:02:37.784 00:02:37.784 baseband: 00:02:37.784 00:02:37.784 gpu: 00:02:37.784 00:02:37.784 00:02:37.784 Message: 00:02:37.784 ================= 00:02:37.784 Content Skipped 00:02:37.784 ================= 00:02:37.784 00:02:37.784 apps: 00:02:37.784 00:02:37.784 libs: 00:02:37.784 kni: explicitly disabled via build config (deprecated lib) 00:02:37.784 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:37.784 00:02:37.784 drivers: 00:02:37.784 common/cpt: not in enabled drivers build config 00:02:37.784 common/dpaax: not in enabled drivers build config 00:02:37.784 common/iavf: not in enabled drivers build config 00:02:37.784 common/idpf: not in enabled drivers build config 00:02:37.784 common/mvep: not in enabled drivers build config 00:02:37.784 common/octeontx: not in enabled drivers build config 00:02:37.784 bus/auxiliary: not in enabled drivers build config 00:02:37.784 bus/dpaa: not in enabled drivers build config 00:02:37.784 bus/fslmc: not in enabled drivers build config 00:02:37.784 bus/ifpga: not in enabled drivers build config 00:02:37.784 bus/vmbus: not in enabled drivers build config 00:02:37.784 common/cnxk: not in enabled drivers build config 00:02:37.784 common/mlx5: not in enabled drivers build config 00:02:37.784 common/qat: not in enabled drivers build config 00:02:37.784 common/sfc_efx: not in enabled drivers build config 00:02:37.784 mempool/bucket: not in enabled drivers build config 00:02:37.784 mempool/cnxk: not in enabled drivers build config 00:02:37.784 mempool/dpaa: not in enabled drivers build config 00:02:37.784 mempool/dpaa2: not in enabled drivers build config 00:02:37.784 mempool/octeontx: not in enabled drivers build config 00:02:37.784 mempool/stack: not in enabled drivers build config 00:02:37.784 dma/cnxk: not in enabled drivers build config 00:02:37.784 dma/dpaa: not in enabled drivers build config 00:02:37.784 dma/dpaa2: not in enabled drivers build config 00:02:37.784 dma/hisilicon: not in enabled drivers build config 00:02:37.784 dma/idxd: not in enabled drivers build config 00:02:37.784 dma/ioat: not in enabled drivers build config 00:02:37.784 dma/skeleton: not in enabled drivers build config 00:02:37.784 net/af_packet: not in enabled drivers build config 00:02:37.784 net/af_xdp: not in enabled drivers build config 00:02:37.784 net/ark: not in enabled drivers build config 00:02:37.784 net/atlantic: not in enabled drivers build config 00:02:37.784 net/avp: not in enabled drivers build config 00:02:37.784 net/axgbe: not in enabled drivers build config 00:02:37.784 net/bnx2x: not in enabled drivers build config 00:02:37.784 net/bnxt: not in enabled drivers build config 00:02:37.784 net/bonding: not in enabled drivers build config 00:02:37.784 net/cnxk: not in enabled drivers build config 
00:02:37.784 net/cxgbe: not in enabled drivers build config
00:02:37.784 net/dpaa: not in enabled drivers build config
00:02:37.784 net/dpaa2: not in enabled drivers build config
00:02:37.784 net/e1000: not in enabled drivers build config
00:02:37.784 net/ena: not in enabled drivers build config
00:02:37.784 net/enetc: not in enabled drivers build config
00:02:37.784 net/enetfec: not in enabled drivers build config
00:02:37.784 net/enic: not in enabled drivers build config
00:02:37.784 net/failsafe: not in enabled drivers build config
00:02:37.784 net/fm10k: not in enabled drivers build config
00:02:37.784 net/gve: not in enabled drivers build config
00:02:37.784 net/hinic: not in enabled drivers build config
00:02:37.784 net/hns3: not in enabled drivers build config
00:02:37.784 net/iavf: not in enabled drivers build config
00:02:37.784 net/ice: not in enabled drivers build config
00:02:37.784 net/idpf: not in enabled drivers build config
00:02:37.784 net/igc: not in enabled drivers build config
00:02:37.784 net/ionic: not in enabled drivers build config
00:02:37.784 net/ipn3ke: not in enabled drivers build config
00:02:37.784 net/ixgbe: not in enabled drivers build config
00:02:37.784 net/kni: not in enabled drivers build config
00:02:37.784 net/liquidio: not in enabled drivers build config
00:02:37.784 net/mana: not in enabled drivers build config
00:02:37.784 net/memif: not in enabled drivers build config
00:02:37.784 net/mlx4: not in enabled drivers build config
00:02:37.784 net/mlx5: not in enabled drivers build config
00:02:37.784 net/mvneta: not in enabled drivers build config
00:02:37.784 net/mvpp2: not in enabled drivers build config
00:02:37.784 net/netvsc: not in enabled drivers build config
00:02:37.784 net/nfb: not in enabled drivers build config
00:02:37.784 net/nfp: not in enabled drivers build config
00:02:37.784 net/ngbe: not in enabled drivers build config
00:02:37.784 net/null: not in enabled drivers build config
00:02:37.784 net/octeontx: not in enabled drivers build config
00:02:37.784 net/octeon_ep: not in enabled drivers build config
00:02:37.784 net/pcap: not in enabled drivers build config
00:02:37.784 net/pfe: not in enabled drivers build config
00:02:37.784 net/qede: not in enabled drivers build config
00:02:37.784 net/ring: not in enabled drivers build config
00:02:37.784 net/sfc: not in enabled drivers build config
00:02:37.784 net/softnic: not in enabled drivers build config
00:02:37.784 net/tap: not in enabled drivers build config
00:02:37.784 net/thunderx: not in enabled drivers build config
00:02:37.784 net/txgbe: not in enabled drivers build config
00:02:37.784 net/vdev_netvsc: not in enabled drivers build config
00:02:37.784 net/vhost: not in enabled drivers build config
00:02:37.784 net/virtio: not in enabled drivers build config
00:02:37.784 net/vmxnet3: not in enabled drivers build config
00:02:37.784 raw/cnxk_bphy: not in enabled drivers build config
00:02:37.784 raw/cnxk_gpio: not in enabled drivers build config
00:02:37.784 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:37.784 raw/ifpga: not in enabled drivers build config
00:02:37.784 raw/ntb: not in enabled drivers build config
00:02:37.784 raw/skeleton: not in enabled drivers build config
00:02:37.784 crypto/armv8: not in enabled drivers build config
00:02:37.784 crypto/bcmfs: not in enabled drivers build config
00:02:37.784 crypto/caam_jr: not in enabled drivers build config
00:02:37.784 crypto/ccp: not in enabled drivers build config
00:02:37.784 crypto/cnxk: not in enabled drivers build config
00:02:37.784 crypto/dpaa_sec: not in enabled drivers build config
00:02:37.784 crypto/dpaa2_sec: not in enabled drivers build config
00:02:37.784 crypto/ipsec_mb: not in enabled drivers build config
00:02:37.784 crypto/mlx5: not in enabled drivers build config
00:02:37.784 crypto/mvsam: not in enabled drivers build config
00:02:37.784 crypto/nitrox: not in enabled drivers build config
00:02:37.784 crypto/null: not in enabled drivers build config
00:02:37.784 crypto/octeontx: not in enabled drivers build config
00:02:37.784 crypto/openssl: not in enabled drivers build config
00:02:37.784 crypto/scheduler: not in enabled drivers build config
00:02:37.784 crypto/uadk: not in enabled drivers build config
00:02:37.784 crypto/virtio: not in enabled drivers build config
00:02:37.784 compress/isal: not in enabled drivers build config
00:02:37.784 compress/mlx5: not in enabled drivers build config
00:02:37.784 compress/octeontx: not in enabled drivers build config
00:02:37.784 compress/zlib: not in enabled drivers build config
00:02:37.784 regex/mlx5: not in enabled drivers build config
00:02:37.784 regex/cn9k: not in enabled drivers build config
00:02:37.784 vdpa/ifc: not in enabled drivers build config
00:02:37.784 vdpa/mlx5: not in enabled drivers build config
00:02:37.784 vdpa/sfc: not in enabled drivers build config
00:02:37.784 event/cnxk: not in enabled drivers build config
00:02:37.784 event/dlb2: not in enabled drivers build config
00:02:37.784 event/dpaa: not in enabled drivers build config
00:02:37.784 event/dpaa2: not in enabled drivers build config
00:02:37.784 event/dsw: not in enabled drivers build config
00:02:37.784 event/opdl: not in enabled drivers build config
00:02:37.784 event/skeleton: not in enabled drivers build config
00:02:37.784 event/sw: not in enabled drivers build config
00:02:37.784 event/octeontx: not in enabled drivers build config
00:02:37.784 baseband/acc: not in enabled drivers build config
00:02:37.784 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:37.784 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:37.784 baseband/la12xx: not in enabled drivers build config
00:02:37.784 baseband/null: not in enabled drivers build config
00:02:37.784 baseband/turbo_sw: not in enabled drivers build config
00:02:37.784 gpu/cuda: not in enabled drivers build config
00:02:37.784 
00:02:37.784 
00:02:37.784 Build targets in project: 311
00:02:37.784 
00:02:37.784 DPDK 22.11.4
00:02:37.784 
00:02:37.784 User defined options
00:02:37.784 libdir : lib
00:02:37.784 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:02:37.784 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:37.784 c_link_args : 
00:02:37.784 enable_docs : false
00:02:37.784 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:37.784 enable_kmods : false
00:02:37.784 machine : native
00:02:37.784 tests : false
00:02:37.784 
00:02:37.784 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:37.784 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
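The "User defined options" summary above records exactly how this DPDK tree was configured before the ninja step that follows. As a minimal sketch, an equivalent configure-and-build invocation reconstructed from those logged options would look like the lines below; the actual command is issued by SPDK's common/autobuild_common.sh and may differ in detail, and `meson setup` is spelled out here to avoid the deprecation warning printed above:

    # Hedged reconstruction from the logged "User defined options"; paths are
    # taken from the workspace layout visible in this log. This is not the
    # verbatim autobuild_common.sh command.
    cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    ninja -C build-tmp -j112    # same build command recorded at 00:02:38.051 below

The narrow -Denable_drivers list explains the long "not in enabled drivers build config" run above: every driver outside bus/pci, bus/vdev, mempool/ring, net/i40e, and the power drivers is skipped, which is why only 311 build targets remain.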
00:02:38.051 18:57:12 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:02:38.051 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:38.051 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:38.051 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:38.051 [3/740] Generating lib/rte_telemetry_def with a custom command 00:02:38.051 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:38.051 [5/740] Generating lib/rte_ring_def with a custom command 00:02:38.051 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.051 [7/740] Generating lib/rte_eal_def with a custom command 00:02:38.051 [8/740] Generating lib/rte_mempool_mingw with a custom command 00:02:38.051 [9/740] Generating lib/rte_mempool_def with a custom command 00:02:38.051 [10/740] Generating lib/rte_eal_mingw with a custom command 00:02:38.051 [11/740] Generating lib/rte_rcu_mingw with a custom command 00:02:38.051 [12/740] Generating lib/rte_meter_def with a custom command 00:02:38.051 [13/740] Generating lib/rte_rcu_def with a custom command 00:02:38.051 [14/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:38.051 [15/740] Generating lib/rte_mbuf_def with a custom command 00:02:38.051 [16/740] Generating lib/rte_ring_mingw with a custom command 00:02:38.051 [17/740] Generating lib/rte_net_mingw with a custom command 00:02:38.051 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.051 [19/740] Generating lib/rte_meter_mingw with a custom command 00:02:38.320 [20/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.320 [21/740] Generating lib/rte_net_def with a custom command 00:02:38.320 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.320 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.320 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:38.320 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:38.320 [26/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:38.320 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.320 [28/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.320 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:38.320 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:38.320 [31/740] Generating lib/rte_pci_mingw with a custom command 00:02:38.320 [32/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:38.320 [33/740] Generating lib/rte_pci_def with a custom command 00:02:38.320 [34/740] Generating lib/rte_ethdev_def with a custom command 00:02:38.320 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.320 [36/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.320 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.320 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.320 [39/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:38.320 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:38.320 [41/740] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:38.320 [42/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.320 [43/740] Generating lib/rte_cmdline_def with a custom command 00:02:38.320 [44/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:38.320 [45/740] Linking static target lib/librte_kvargs.a 00:02:38.320 [46/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.320 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.320 [48/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.320 [49/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:38.320 [50/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.320 [51/740] Generating lib/rte_metrics_def with a custom command 00:02:38.320 [52/740] Generating lib/rte_metrics_mingw with a custom command 00:02:38.320 [53/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:38.320 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.320 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:38.320 [56/740] Generating lib/rte_hash_mingw with a custom command 00:02:38.320 [57/740] Generating lib/rte_hash_def with a custom command 00:02:38.320 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.320 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.320 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:38.320 [61/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.320 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:38.320 [63/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:38.320 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.320 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.320 [66/740] Generating lib/rte_timer_def with a custom command 00:02:38.320 [67/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.320 [68/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:38.320 [69/740] Generating lib/rte_timer_mingw with a custom command 00:02:38.320 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.320 [71/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.320 [72/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.320 [73/740] Generating lib/rte_acl_def with a custom command 00:02:38.320 [74/740] Generating lib/rte_acl_mingw with a custom command 00:02:38.320 [75/740] Generating lib/rte_bbdev_def with a custom command 00:02:38.320 [76/740] Generating lib/rte_bitratestats_def with a custom command 00:02:38.320 [77/740] Linking static target lib/librte_pci.a 00:02:38.320 [78/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:38.320 [79/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:38.320 [80/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:38.320 [81/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.320 [82/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:38.320 [83/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 
00:02:38.320 [84/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:38.320 [85/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.320 [86/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.320 [87/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.320 [88/740] Generating lib/rte_bpf_def with a custom command 00:02:38.320 [89/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:38.320 [90/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.320 [91/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:38.320 [92/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.320 [93/740] Generating lib/rte_bpf_mingw with a custom command 00:02:38.320 [94/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:38.320 [95/740] Linking static target lib/librte_meter.a 00:02:38.320 [96/740] Generating lib/rte_cfgfile_def with a custom command 00:02:38.320 [97/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:38.320 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.583 [99/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.583 [100/740] Linking static target lib/librte_ring.a 00:02:38.583 [101/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:38.583 [102/740] Generating lib/rte_compressdev_def with a custom command 00:02:38.583 [103/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:38.583 [104/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.583 [105/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:38.583 [106/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.583 [107/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:38.583 [108/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:38.583 [109/740] Generating lib/rte_cryptodev_def with a custom command 00:02:38.583 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.583 [111/740] Generating lib/rte_distributor_mingw with a custom command 00:02:38.583 [112/740] Generating lib/rte_efd_mingw with a custom command 00:02:38.583 [113/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.583 [114/740] Generating lib/rte_efd_def with a custom command 00:02:38.583 [115/740] Generating lib/rte_distributor_def with a custom command 00:02:38.583 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.583 [117/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.583 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.583 [119/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:38.583 [120/740] Generating lib/rte_eventdev_def with a custom command 00:02:38.583 [121/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.583 [122/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:38.583 [123/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:38.583 [124/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:38.583 [125/740] Generating lib/rte_gpudev_def with a custom command 00:02:38.583 
[126/740] Generating lib/rte_gro_def with a custom command 00:02:38.583 [127/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.583 [128/740] Generating lib/rte_gro_mingw with a custom command 00:02:38.583 [129/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.583 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:38.583 [131/740] Generating lib/rte_gso_def with a custom command 00:02:38.583 [132/740] Generating lib/rte_gso_mingw with a custom command 00:02:38.583 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:38.583 [134/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.845 [135/740] Generating lib/rte_ip_frag_def with a custom command 00:02:38.845 [136/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.845 [137/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:38.845 [138/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.845 [139/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.845 [140/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:38.845 [141/740] Generating lib/rte_jobstats_def with a custom command 00:02:38.845 [142/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.845 [143/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.845 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.845 [145/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.845 [146/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.845 [147/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.845 [148/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:38.845 [149/740] Linking target lib/librte_kvargs.so.23.0 00:02:38.845 [150/740] Linking static target lib/librte_cfgfile.a 00:02:38.845 [151/740] Generating lib/rte_latencystats_def with a custom command 00:02:38.845 [152/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:38.845 [153/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.845 [154/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.845 [155/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.845 [156/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.845 [157/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.845 [158/740] Generating lib/rte_lpm_def with a custom command 00:02:38.845 [159/740] Generating lib/rte_lpm_mingw with a custom command 00:02:38.845 [160/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.845 [161/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.845 [162/740] Generating lib/rte_member_mingw with a custom command 00:02:38.845 [163/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.845 [164/740] Generating lib/rte_member_def with a custom command 00:02:38.845 [165/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.845 [166/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.845 [167/740] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:38.845 [168/740] Generating lib/rte_pcapng_def with a custom command 00:02:38.845 [169/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:38.845 [170/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:38.845 [171/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.845 [172/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:38.845 [173/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.845 [174/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.845 [175/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:38.845 [176/740] Linking static target lib/librte_jobstats.a 00:02:38.845 [177/740] Generating lib/rte_power_def with a custom command 00:02:38.845 [178/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.845 [179/740] Generating lib/rte_power_mingw with a custom command 00:02:38.845 [180/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.845 [181/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:38.845 [182/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:39.116 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.116 [184/740] Linking static target lib/librte_cmdline.a 00:02:39.116 [185/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:39.116 [186/740] Linking static target lib/librte_timer.a 00:02:39.116 [187/740] Linking static target lib/librte_metrics.a 00:02:39.116 [188/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:39.116 [189/740] Generating lib/rte_rawdev_def with a custom command 00:02:39.116 [190/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.116 [191/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:39.116 [192/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:39.116 [193/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:39.116 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:39.116 [195/740] Generating lib/rte_regexdev_def with a custom command 00:02:39.116 [196/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:39.116 [197/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:39.116 [198/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.116 [199/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.116 [200/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.116 [201/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.116 [202/740] Generating lib/rte_dmadev_def with a custom command 00:02:39.116 [203/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.116 [204/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.116 [205/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.116 [206/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:39.116 [207/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:39.116 [208/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.116 [209/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:39.116 [210/740] Linking static target lib/librte_net.a 
00:02:39.116 [211/740] Linking static target lib/librte_telemetry.a 00:02:39.116 [212/740] Generating lib/rte_rib_mingw with a custom command 00:02:39.116 [213/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:39.116 [214/740] Generating lib/rte_rib_def with a custom command 00:02:39.116 [215/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:39.116 [216/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:39.116 [217/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:39.116 [218/740] Generating lib/rte_reorder_mingw with a custom command 00:02:39.116 [219/740] Generating lib/rte_reorder_def with a custom command 00:02:39.116 [220/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:39.116 [221/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:39.116 [222/740] Generating lib/rte_sched_def with a custom command 00:02:39.116 [223/740] Generating lib/rte_sched_mingw with a custom command 00:02:39.116 [224/740] Linking static target lib/librte_bitratestats.a 00:02:39.116 [225/740] Generating lib/rte_security_def with a custom command 00:02:39.116 [226/740] Generating lib/rte_security_mingw with a custom command 00:02:39.116 [227/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:39.116 [228/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.116 [229/740] Generating lib/rte_stack_def with a custom command 00:02:39.116 [230/740] Generating lib/rte_stack_mingw with a custom command 00:02:39.116 [231/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.116 [232/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.116 [233/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:39.116 [234/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.116 [235/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.116 [236/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:39.116 [237/740] Generating lib/rte_vhost_def with a custom command 00:02:39.116 [238/740] Generating lib/rte_vhost_mingw with a custom command 00:02:39.116 [239/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:39.116 [240/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:39.116 [241/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:39.116 [242/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:39.116 [243/740] Generating lib/rte_ipsec_def with a custom command 00:02:39.116 [244/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:39.116 [245/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:39.116 [246/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.116 [247/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:39.116 [248/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:39.116 [249/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:39.381 [250/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:39.381 [251/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:39.381 [252/740] Generating lib/rte_fib_mingw with a custom command 00:02:39.381 [253/740] Generating lib/rte_fib_def with a custom command 00:02:39.381 
[254/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:39.381 [255/740] Linking static target lib/librte_stack.a 00:02:39.381 [256/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:39.381 [257/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:39.381 [258/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:39.381 [259/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:39.381 [260/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:39.381 [261/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:39.381 [262/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:39.381 [263/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.381 [264/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.381 [265/740] Generating lib/rte_port_def with a custom command 00:02:39.381 [266/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:39.381 [267/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.381 [268/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:39.381 [269/740] Linking static target lib/librte_compressdev.a 00:02:39.381 [270/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:39.381 [271/740] Generating lib/rte_pdump_mingw with a custom command 00:02:39.381 [272/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:39.381 [273/740] Generating lib/rte_port_mingw with a custom command 00:02:39.381 [274/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:39.381 [275/740] Generating lib/rte_pdump_def with a custom command 00:02:39.381 [276/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.381 [277/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.381 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:39.381 [279/740] Linking static target lib/librte_rcu.a 00:02:39.381 [280/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.381 [281/740] Linking static target lib/librte_mempool.a 00:02:39.381 [282/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:39.381 [283/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:39.381 [284/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:39.381 [285/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.381 [286/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:39.381 [287/740] Linking static target lib/librte_rawdev.a 00:02:39.381 [288/740] Linking static target lib/librte_bbdev.a 00:02:39.381 [289/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:39.645 [290/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.645 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:39.645 [292/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:39.645 [293/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.645 [294/740] Generating lib/rte_table_def with a custom command 00:02:39.645 [295/740] Linking static target lib/librte_gro.a 00:02:39.645 [296/740] 
Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:39.645 [297/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:39.645 [298/740] Generating lib/rte_table_mingw with a custom command 00:02:39.645 [299/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:39.645 [300/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:39.645 [301/740] Linking static target lib/librte_dmadev.a 00:02:39.645 [302/740] Linking static target lib/librte_gpudev.a 00:02:39.645 [303/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.645 [304/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:39.645 [305/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.645 [306/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:39.645 [307/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:39.645 [308/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.645 [309/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:39.645 [310/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:39.645 [311/740] Linking static target lib/librte_latencystats.a 00:02:39.645 [312/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:39.645 [313/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:39.645 [314/740] Generating lib/rte_pipeline_def with a custom command 00:02:39.645 [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:39.645 [316/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:39.645 [317/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:39.645 [318/740] Linking static target lib/librte_gso.a 00:02:39.645 [319/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.645 [320/740] Linking target lib/librte_telemetry.so.23.0 00:02:39.645 [321/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:39.645 [322/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:39.645 [323/740] Generating lib/rte_graph_def with a custom command 00:02:39.645 [324/740] Linking static target lib/librte_distributor.a 00:02:39.645 [325/740] Generating lib/rte_graph_mingw with a custom command 00:02:39.645 [326/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:39.645 [327/740] Linking static target lib/librte_ip_frag.a 00:02:39.913 [328/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:39.913 [329/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:39.913 [330/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.913 [331/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:39.913 [332/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:39.913 [333/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:39.913 [334/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:39.913 [335/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:39.913 [336/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:39.913 [337/740] Linking static target 
lib/librte_regexdev.a 00:02:39.913 [338/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:39.913 [339/740] Generating lib/rte_node_def with a custom command 00:02:39.913 [340/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:39.913 [341/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:39.913 [342/740] Generating lib/rte_node_mingw with a custom command 00:02:39.913 [343/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.913 [344/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.913 [345/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:39.913 [346/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:39.913 [347/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:39.913 [348/740] Linking static target lib/librte_power.a 00:02:39.913 [349/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:39.913 [350/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.913 [351/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.913 [352/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.913 [353/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:39.913 [354/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:39.914 [355/740] Linking static target lib/librte_reorder.a 00:02:39.914 [356/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:39.914 [357/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.914 [358/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:39.914 [359/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.914 [360/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.914 [361/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.914 [362/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:39.914 [363/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:39.914 [364/740] Linking static target lib/librte_eal.a 00:02:39.914 [365/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:39.914 [366/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:39.914 [367/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.914 [368/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:40.181 [369/740] Linking static target lib/librte_security.a 00:02:40.181 [370/740] Linking static target lib/librte_pcapng.a 00:02:40.181 [371/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:40.181 [372/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:40.181 [373/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:40.181 [374/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.181 [375/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:40.181 [376/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.181 [377/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:40.181 [378/740] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:40.181 [379/740] Linking static target lib/librte_mbuf.a 00:02:40.181 [380/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:40.181 [381/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:40.181 [382/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:40.181 [383/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:40.181 [384/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.181 [385/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:40.181 [386/740] Linking static target lib/librte_bpf.a 00:02:40.181 [387/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:40.181 [388/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:40.181 [389/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.181 [390/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:40.181 [391/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.181 [392/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:40.181 [393/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.181 [394/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:40.181 [395/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:40.181 [396/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:40.181 [397/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:40.446 [398/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:40.446 [399/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:40.446 [400/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:40.446 [401/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:40.446 [402/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:40.446 [403/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:40.446 [404/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:40.446 [405/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:40.446 [406/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.446 [407/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:40.446 [408/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:40.446 [409/740] Linking static target lib/librte_lpm.a 00:02:40.446 [410/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:40.446 [411/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:40.446 [412/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:40.446 [413/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:40.446 [414/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:40.446 [415/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:40.446 [416/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:40.446 [417/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:40.446 [418/740] Linking static target lib/librte_rib.a 00:02:40.446 [419/740] Generating 
lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.446 [420/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:40.446 [421/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:40.446 [422/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.446 [423/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:40.446 [424/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:40.446 [425/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.446 [426/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:40.446 [427/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:40.446 [428/740] Linking static target lib/librte_graph.a 00:02:40.446 [429/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:40.446 [430/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:40.446 [431/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:40.446 [432/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:40.446 [433/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.446 [434/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:40.446 [435/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:40.446 [436/740] Linking static target lib/librte_efd.a 00:02:40.717 [437/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.717 [438/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:40.717 [439/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:40.717 [440/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:40.717 [441/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:40.717 [442/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:40.717 [443/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:40.717 [444/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:40.717 [445/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.717 [446/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.717 [447/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:40.717 [448/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:40.717 [449/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:40.717 [450/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:40.717 [451/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.717 [452/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:40.979 [453/740] Linking static target lib/librte_fib.a 00:02:40.980 [454/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:40.980 [455/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.980 [456/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:40.980 [457/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.980 [458/740] Compiling C object 
lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:40.980 [459/740] Linking static target drivers/librte_bus_vdev.a 00:02:40.980 [460/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.980 [461/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:40.980 [462/740] Linking static target lib/librte_pdump.a 00:02:40.980 [463/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.980 [464/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:40.980 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.980 [466/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.980 [467/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:40.980 [468/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:40.980 [469/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.980 [470/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:40.980 [471/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:41.246 [472/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:41.246 [473/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.246 [474/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.246 [475/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:41.246 [476/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.246 [477/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:41.246 [478/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.246 [479/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:41.246 [480/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:41.246 [481/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.246 [482/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.246 [483/740] Linking static target drivers/librte_bus_pci.a 00:02:41.246 [484/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:41.246 [485/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:41.246 [486/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:41.246 [487/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:41.246 [488/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:41.246 [489/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:41.246 [490/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:41.246 [491/740] Linking static target lib/librte_table.a 00:02:41.246 [492/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.246 [493/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:41.246 [494/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:41.506 [495/740] Compiling C 
object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:41.506 [496/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:41.506 [497/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:41.506 [498/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.506 [499/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:41.506 [500/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:41.506 [501/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.506 [502/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:41.506 [503/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:41.506 [504/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:41.506 [505/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:41.506 [506/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:41.506 [507/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:41.506 [508/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:41.506 [509/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:41.506 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:41.506 [511/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:41.506 [512/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:41.506 [513/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.506 [514/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:41.506 [515/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:41.506 [516/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:41.506 [517/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:41.506 [518/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:41.506 [519/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:41.506 [520/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.506 [521/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:41.506 [522/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:41.506 [523/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:41.506 [524/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:41.506 [525/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:41.506 [526/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:41.506 [527/740] Linking static target lib/librte_sched.a 00:02:41.766 [528/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:41.766 [529/740] Linking static target lib/librte_cryptodev.a 00:02:41.766 [530/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:41.766 [531/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 
00:02:41.766 [532/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:41.766 [533/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:41.766 [534/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:41.766 [535/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.766 [536/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:41.766 [537/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:41.766 [538/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:41.766 [539/740] Linking static target lib/librte_node.a 00:02:41.766 [540/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:41.766 [541/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:41.766 [542/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:41.766 [543/740] Linking static target lib/librte_ipsec.a 00:02:41.766 [544/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:41.766 [545/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:41.766 [546/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.766 [547/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:41.766 [548/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:41.766 [549/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.766 [550/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.766 [551/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.766 [552/740] Linking static target drivers/librte_mempool_ring.a 00:02:41.766 [553/740] Linking static target lib/librte_ethdev.a 00:02:42.026 [554/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:42.026 [555/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:42.026 [556/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:42.026 [557/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:42.026 [558/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:42.026 [559/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:42.026 [560/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:42.026 [561/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:42.026 [562/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:42.026 [563/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:42.026 [564/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:42.026 [565/740] Linking static target lib/librte_port.a 00:02:42.026 [566/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:42.026 [567/740] Linking static target lib/librte_member.a 00:02:42.026 [568/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:42.026 [569/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.026 [570/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:42.026 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:42.026 [572/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:42.026 [573/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:42.026 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:42.026 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:42.026 [576/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:42.026 [577/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:42.026 [578/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:42.026 [579/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:42.026 [580/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.026 [581/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.286 [582/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:42.286 [583/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:42.286 [584/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:42.286 [585/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:42.286 [586/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.286 [587/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:42.286 [588/740] Linking static target lib/librte_hash.a 00:02:42.286 [589/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:42.286 [590/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:42.286 [591/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:42.286 [592/740] Linking static target lib/librte_eventdev.a 00:02:42.286 [593/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.286 [594/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:42.286 [595/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:42.286 [596/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:42.286 [597/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:42.286 [598/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:42.286 [599/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:42.286 [600/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:42.544 [601/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:42.544 [602/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:42.544 [603/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.544 [604/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:42.544 [605/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:42.544 [606/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:42.544 [607/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:42.802 [608/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:42.802 [609/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:42.802 [610/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:42.802 [611/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:42.802 [612/740] Linking static target lib/librte_acl.a 00:02:42.803 [613/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.061 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:43.320 [615/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.320 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:43.320 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:43.320 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.581 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:43.841 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:44.101 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:44.678 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:44.678 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:44.939 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:44.939 [625/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:44.939 [626/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:44.939 [627/740] Linking static target drivers/librte_net_i40e.a 00:02:45.199 [628/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.458 [629/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.458 [630/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:45.718 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:45.718 [632/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.978 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.263 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.522 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.522 [636/740] Linking static target lib/librte_vhost.a 00:02:52.092 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:52.092 [638/740] Linking static target lib/librte_pipeline.a 00:02:52.662 [639/740] Linking target app/dpdk-dumpcap 00:02:52.662 [640/740] Linking target app/dpdk-test-compress-perf 00:02:52.662 [641/740] Linking target app/dpdk-test-cmdline 00:02:52.662 [642/740] Linking target app/dpdk-proc-info 00:02:52.662 [643/740] Linking target app/dpdk-test-bbdev 00:02:52.662 [644/740] Linking target app/dpdk-test-fib 00:02:52.662 [645/740] Linking target app/dpdk-test-acl 00:02:52.662 [646/740] Linking target app/dpdk-pdump 00:02:52.662 [647/740] Linking target app/dpdk-test-gpudev 00:02:52.662 [648/740] Linking target app/dpdk-test-regex 00:02:52.662 [649/740] Linking target app/dpdk-test-pipeline 00:02:52.662 [650/740] Linking target app/dpdk-test-security-perf 00:02:52.662 
[651/740] Linking target app/dpdk-test-sad 00:02:52.662 [652/740] Linking target app/dpdk-test-flow-perf 00:02:52.662 [653/740] Linking target app/dpdk-test-eventdev 00:02:52.662 [654/740] Linking target app/dpdk-test-crypto-perf 00:02:52.662 [655/740] Linking target app/dpdk-testpmd 00:02:54.045 [656/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.306 [657/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.306 [658/740] Linking target lib/librte_eal.so.23.0 00:02:54.565 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:54.565 [660/740] Linking target lib/librte_meter.so.23.0 00:02:54.565 [661/740] Linking target lib/librte_dmadev.so.23.0 00:02:54.565 [662/740] Linking target lib/librte_ring.so.23.0 00:02:54.565 [663/740] Linking target lib/librte_pci.so.23.0 00:02:54.565 [664/740] Linking target lib/librte_timer.so.23.0 00:02:54.565 [665/740] Linking target lib/librte_jobstats.so.23.0 00:02:54.565 [666/740] Linking target lib/librte_cfgfile.so.23.0 00:02:54.565 [667/740] Linking target lib/librte_stack.so.23.0 00:02:54.565 [668/740] Linking target lib/librte_rawdev.so.23.0 00:02:54.566 [669/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:54.566 [670/740] Linking target lib/librte_graph.so.23.0 00:02:54.566 [671/740] Linking target lib/librte_acl.so.23.0 00:02:54.826 [672/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:54.826 [673/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:54.826 [674/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:54.826 [675/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:54.826 [676/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:54.826 [677/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:54.826 [678/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:54.826 [679/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:54.826 [680/740] Linking target lib/librte_rcu.so.23.0 00:02:54.826 [681/740] Linking target lib/librte_mempool.so.23.0 00:02:54.826 [682/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:54.826 [683/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:54.826 [684/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:54.826 [685/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:55.086 [686/740] Linking target lib/librte_mbuf.so.23.0 00:02:55.086 [687/740] Linking target lib/librte_rib.so.23.0 00:02:55.086 [688/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:55.086 [689/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:55.086 [690/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:55.086 [691/740] Linking target lib/librte_bbdev.so.23.0 00:02:55.086 [692/740] Linking target lib/librte_net.so.23.0 00:02:55.086 [693/740] Linking target lib/librte_compressdev.so.23.0 00:02:55.086 [694/740] Linking target lib/librte_distributor.so.23.0 00:02:55.086 [695/740] Linking target lib/librte_gpudev.so.23.0 00:02:55.086 [696/740] Linking target lib/librte_regexdev.so.23.0 
00:02:55.086 [697/740] Linking target lib/librte_reorder.so.23.0 00:02:55.086 [698/740] Linking target lib/librte_sched.so.23.0 00:02:55.086 [699/740] Linking target lib/librte_cryptodev.so.23.0 00:02:55.086 [700/740] Linking target lib/librte_fib.so.23.0 00:02:55.346 [701/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:55.346 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:55.346 [703/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:55.346 [704/740] Linking target lib/librte_hash.so.23.0 00:02:55.346 [705/740] Linking target lib/librte_cmdline.so.23.0 00:02:55.346 [706/740] Linking target lib/librte_ethdev.so.23.0 00:02:55.346 [707/740] Linking target lib/librte_security.so.23.0 00:02:55.606 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:55.606 [709/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:55.606 [710/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:55.606 [711/740] Linking target lib/librte_efd.so.23.0 00:02:55.606 [712/740] Linking target lib/librte_lpm.so.23.0 00:02:55.606 [713/740] Linking target lib/librte_member.so.23.0 00:02:55.606 [714/740] Linking target lib/librte_metrics.so.23.0 00:02:55.606 [715/740] Linking target lib/librte_ipsec.so.23.0 00:02:55.606 [716/740] Linking target lib/librte_ip_frag.so.23.0 00:02:55.606 [717/740] Linking target lib/librte_bpf.so.23.0 00:02:55.606 [718/740] Linking target lib/librte_pcapng.so.23.0 00:02:55.606 [719/740] Linking target lib/librte_gso.so.23.0 00:02:55.606 [720/740] Linking target lib/librte_power.so.23.0 00:02:55.606 [721/740] Linking target lib/librte_gro.so.23.0 00:02:55.606 [722/740] Linking target lib/librte_eventdev.so.23.0 00:02:55.606 [723/740] Linking target lib/librte_vhost.so.23.0 00:02:55.606 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:55.606 [725/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:55.606 [726/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:55.606 [727/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:55.606 [728/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:55.606 [729/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:55.606 [730/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:55.606 [731/740] Linking target lib/librte_latencystats.so.23.0 00:02:55.866 [732/740] Linking target lib/librte_bitratestats.so.23.0 00:02:55.866 [733/740] Linking target lib/librte_pdump.so.23.0 00:02:55.866 [734/740] Linking target lib/librte_node.so.23.0 00:02:55.866 [735/740] Linking target lib/librte_port.so.23.0 00:02:55.866 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:55.866 [737/740] Linking target lib/librte_table.so.23.0 00:02:56.126 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:57.511 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.511 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:57.511 18:57:31 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:57.511 18:57:31 build_native_dpdk -- 
common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:57.511 18:57:31 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install 00:02:57.511 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:57.773 [0/1] Installing files. 00:02:57.773 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.773 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:57.774 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.774 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.775 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 
00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:57.775 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:57.775 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.776 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.038 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.039 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:58.040 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:58.040 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.040 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.041 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:58.305 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:58.305 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:58.305 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.305 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:58.305 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:58.305 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.305 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.305 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.305 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.306 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.307 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.308 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:58.309 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:58.309 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:58.309 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:58.309 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:58.309 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:58.309 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:58.309 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:58.309 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:58.309 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:58.309 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:58.309 
Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:58.309 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:58.309 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:58.309 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:58.309 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:58.309 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:58.309 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:58.309 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:58.309 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:58.309 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:58.309 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:58.309 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:58.309 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:58.309 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:58.309 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:58.309 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:58.309 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:58.309 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:58.309 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:58.309 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:58.309 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:58.309 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:58.309 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:58.309 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:58.309 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:58.309 Installing symlink pointing to 
librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:58.309 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:58.309 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:58.309 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:58.309 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:58.309 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:58.309 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:58.309 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:58.309 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:58.309 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:58.309 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:58.309 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:58.309 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:58.309 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:58.309 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:58.310 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:58.310 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:58.310 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:58.310 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:58.310 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:58.310 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:58.310 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:58.310 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:58.310 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:58.310 Installing symlink pointing to librte_jobstats.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:58.310 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:58.310 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:58.310 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:58.310 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:58.310 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:58.310 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:58.310 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:58.310 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:58.310 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:58.310 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:58.310 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:58.310 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:58.310 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:58.310 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:58.310 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:58.310 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:58.310 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:58.310 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:58.310 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:58.310 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:58.310 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:58.310 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:58.310 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:58.310 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:58.310 Installing symlink pointing to 
librte_security.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:58.310 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:58.310 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:58.310 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:58.310 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:58.310 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:58.310 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:58.310 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:58.310 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:58.310 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:58.310 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:58.310 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:58.310 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:58.310 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:58.310 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:58.310 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:58.310 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:58.310 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:58.310 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:58.310 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:58.310 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:58.310 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:58.310 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:58.310 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:58.310 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:58.310 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:58.310 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:58.310 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:58.310 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:58.310 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:58.310 Installing symlink pointing to librte_graph.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so
00:02:58.310 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23
00:02:58.310 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so
00:02:58.310 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:02:58.310 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:02:58.310 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:02:58.310 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:02:58.310 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:02:58.310 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:02:58.310 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:02:58.310 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:02:58.310 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:02:58.310 18:57:32 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat
00:02:58.310 18:57:32 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:58.310
00:02:58.310 real 0m27.884s
00:02:58.310 user 6m37.773s
00:02:58.310 sys 2m18.694s
00:02:58.310 18:57:32 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:58.310 18:57:32 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:58.310 ************************************
00:02:58.310 END TEST build_native_dpdk
00:02:58.310 ************************************
00:02:58.310 18:57:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:58.310 18:57:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:58.310 18:57:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:58.310 18:57:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:58.310 18:57:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:58.310 18:57:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:58.310 18:57:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:58.310 18:57:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared
00:02:58.570 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
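The configure invocation above resolves the freshly built DPDK through the pkg-config files staged into build/lib/pkgconfig (libdpdk.pc and libdpdk-libs.pc, installed a few entries earlier). A minimal sketch, assuming pkg-config is on the PATH, of reproducing that lookup by hand:

    # Hypothetical verification, not part of this CI run:
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # expected to report the 22.11.x stable release fetched earlier
    pkg-config --cflags --libs libdpdk   # the compile/link flags configure picks up

Because this is a shared build (note the librte_*.so.23 symlink chains above and SPDK's --with-shared flag), binaries linked this way also need the DPDK library directory on the loader path at runtime, e.g. LD_LIBRARY_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib.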
00:02:58.570 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:58.570 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:58.830 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:02:59.091 Using 'verbs' RDMA provider
00:03:15.032 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:27.263 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:28.094 Creating mk/config.mk...done.
00:03:28.094 Creating mk/cc.flags.mk...done.
00:03:28.094 Type 'make' to build.
00:03:28.094 18:58:02 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:03:28.094 18:58:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:28.094 18:58:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:28.094 18:58:02 -- common/autotest_common.sh@10 -- $ set +x
00:03:28.094 ************************************
00:03:28.094 START TEST make
00:03:28.094 ************************************
00:03:28.094 18:58:02 make -- common/autotest_common.sh@1129 -- $ make -j112
00:04:00.200 CC lib/log/log.o
00:04:00.200 CC lib/log/log_flags.o
00:04:00.200 CC lib/log/log_deprecated.o
00:04:00.200 CC lib/ut/ut.o
00:04:00.200 CC lib/ut_mock/mock.o
00:04:00.200 LIB libspdk_ut_mock.a
00:04:00.200 LIB libspdk_log.a
00:04:00.200 LIB libspdk_ut.a
00:04:00.200 SO libspdk_ut_mock.so.6.0
00:04:00.200 SO libspdk_log.so.7.1
00:04:00.200 SO libspdk_ut.so.2.0
00:04:00.200 SYMLINK libspdk_ut_mock.so
00:04:00.200 SYMLINK libspdk_ut.so
00:04:00.200 SYMLINK libspdk_log.so
00:04:00.200 CC lib/util/base64.o
00:04:00.200 CC lib/util/bit_array.o
00:04:00.200 CC lib/util/cpuset.o
00:04:00.200 CC lib/util/crc16.o
00:04:00.200 CC lib/util/crc32.o
00:04:00.200 CC lib/util/fd.o
00:04:00.200 CC lib/util/crc32c.o
00:04:00.200 CC lib/util/crc32_ieee.o
00:04:00.200 CC lib/util/crc64.o
00:04:00.200 CC lib/util/dif.o
00:04:00.200 CC lib/util/fd_group.o
00:04:00.200 CC lib/util/file.o
00:04:00.200 CC lib/util/hexlify.o
00:04:00.200 CC lib/util/iov.o
00:04:00.200 CC lib/util/math.o
00:04:00.200 CC lib/ioat/ioat.o
00:04:00.200 CC lib/util/net.o
00:04:00.200 CC lib/util/pipe.o
00:04:00.200 CC lib/util/strerror_tls.o
00:04:00.200 CC lib/util/string.o
00:04:00.200 CC lib/util/uuid.o
00:04:00.200 CC lib/util/xor.o
00:04:00.200 CC lib/util/zipf.o
00:04:00.200 CC lib/dma/dma.o
00:04:00.200 CC lib/util/md5.o
00:04:00.200 CXX lib/trace_parser/trace.o
00:04:00.200 CC lib/vfio_user/host/vfio_user_pci.o
00:04:00.200 CC lib/vfio_user/host/vfio_user.o
00:04:00.200 LIB libspdk_dma.a
00:04:00.200 SO libspdk_dma.so.5.0
00:04:00.200 LIB libspdk_ioat.a
00:04:00.200 SO libspdk_ioat.so.7.0
00:04:00.200 SYMLINK libspdk_dma.so
00:04:00.200 SYMLINK libspdk_ioat.so
00:04:00.200 LIB libspdk_vfio_user.a
00:04:00.200 SO libspdk_vfio_user.so.5.0
00:04:00.200 LIB libspdk_util.a
00:04:00.200 SYMLINK libspdk_vfio_user.so
00:04:00.200 SO libspdk_util.so.10.1
00:04:00.200 SYMLINK libspdk_util.so
00:04:00.200 CC lib/rdma_utils/rdma_utils.o
00:04:00.200 CC lib/json/json_parse.o
00:04:00.200 CC lib/json/json_util.o
00:04:00.200 CC lib/conf/conf.o
00:04:00.200 CC lib/json/json_write.o
00:04:00.200 CC lib/env_dpdk/env.o
00:04:00.200 CC lib/env_dpdk/memory.o
00:04:00.200 CC lib/env_dpdk/pci.o
00:04:00.200 CC lib/env_dpdk/init.o
00:04:00.200 CC lib/env_dpdk/threads.o
00:04:00.200 CC lib/env_dpdk/pci_virtio.o
00:04:00.200 CC lib/vmd/vmd.o
00:04:00.200 CC lib/env_dpdk/pci_ioat.o
00:04:00.200 CC lib/vmd/led.o 00:04:00.200 CC lib/env_dpdk/pci_vmd.o 00:04:00.200 CC lib/env_dpdk/pci_idxd.o 00:04:00.200 CC lib/env_dpdk/pci_event.o 00:04:00.200 CC lib/env_dpdk/sigbus_handler.o 00:04:00.200 CC lib/idxd/idxd.o 00:04:00.200 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.200 CC lib/env_dpdk/pci_dpdk.o 00:04:00.200 CC lib/idxd/idxd_user.o 00:04:00.200 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.200 CC lib/idxd/idxd_kernel.o 00:04:00.200 LIB libspdk_conf.a 00:04:00.200 LIB libspdk_rdma_utils.a 00:04:00.200 LIB libspdk_json.a 00:04:00.200 SO libspdk_conf.so.6.0 00:04:00.200 SO libspdk_rdma_utils.so.1.0 00:04:00.200 SO libspdk_json.so.6.0 00:04:00.200 SYMLINK libspdk_conf.so 00:04:00.200 SYMLINK libspdk_rdma_utils.so 00:04:00.200 SYMLINK libspdk_json.so 00:04:00.200 LIB libspdk_idxd.a 00:04:00.200 LIB libspdk_trace_parser.a 00:04:00.200 LIB libspdk_vmd.a 00:04:00.200 SO libspdk_idxd.so.12.1 00:04:00.200 SO libspdk_trace_parser.so.6.0 00:04:00.200 SO libspdk_vmd.so.6.0 00:04:00.200 SYMLINK libspdk_idxd.so 00:04:00.200 SYMLINK libspdk_trace_parser.so 00:04:00.200 SYMLINK libspdk_vmd.so 00:04:00.200 CC lib/jsonrpc/jsonrpc_server.o 00:04:00.200 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:00.200 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:00.200 CC lib/jsonrpc/jsonrpc_client.o 00:04:00.200 CC lib/rdma_provider/common.o 00:04:00.200 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:00.200 LIB libspdk_rdma_provider.a 00:04:00.200 LIB libspdk_jsonrpc.a 00:04:00.200 SO libspdk_rdma_provider.so.7.0 00:04:00.200 SO libspdk_jsonrpc.so.6.0 00:04:00.200 LIB libspdk_env_dpdk.a 00:04:00.200 SYMLINK libspdk_rdma_provider.so 00:04:00.200 SYMLINK libspdk_jsonrpc.so 00:04:00.200 SO libspdk_env_dpdk.so.15.1 00:04:00.200 SYMLINK libspdk_env_dpdk.so 00:04:00.460 CC lib/rpc/rpc.o 00:04:00.460 LIB libspdk_rpc.a 00:04:00.460 SO libspdk_rpc.so.6.0 00:04:00.720 SYMLINK libspdk_rpc.so 00:04:00.980 CC lib/notify/notify.o 00:04:00.980 CC lib/notify/notify_rpc.o 00:04:00.980 CC lib/trace/trace.o 00:04:00.980 CC lib/keyring/keyring.o 00:04:00.980 CC lib/trace/trace_flags.o 00:04:00.980 CC lib/keyring/keyring_rpc.o 00:04:00.980 CC lib/trace/trace_rpc.o 00:04:01.240 LIB libspdk_notify.a 00:04:01.240 SO libspdk_notify.so.6.0 00:04:01.240 LIB libspdk_trace.a 00:04:01.240 LIB libspdk_keyring.a 00:04:01.240 SO libspdk_keyring.so.2.0 00:04:01.240 SO libspdk_trace.so.11.0 00:04:01.240 SYMLINK libspdk_notify.so 00:04:01.500 SYMLINK libspdk_keyring.so 00:04:01.500 SYMLINK libspdk_trace.so 00:04:01.760 CC lib/thread/thread.o 00:04:01.760 CC lib/thread/iobuf.o 00:04:01.760 CC lib/sock/sock.o 00:04:01.760 CC lib/sock/sock_rpc.o 00:04:02.020 LIB libspdk_sock.a 00:04:02.280 SO libspdk_sock.so.10.0 00:04:02.280 SYMLINK libspdk_sock.so 00:04:02.541 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:02.541 CC lib/nvme/nvme_ctrlr.o 00:04:02.541 CC lib/nvme/nvme_fabric.o 00:04:02.541 CC lib/nvme/nvme_ns_cmd.o 00:04:02.541 CC lib/nvme/nvme_ns.o 00:04:02.541 CC lib/nvme/nvme_pcie_common.o 00:04:02.541 CC lib/nvme/nvme_pcie.o 00:04:02.541 CC lib/nvme/nvme_qpair.o 00:04:02.541 CC lib/nvme/nvme.o 00:04:02.541 CC lib/nvme/nvme_quirks.o 00:04:02.541 CC lib/nvme/nvme_transport.o 00:04:02.541 CC lib/nvme/nvme_discovery.o 00:04:02.541 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:02.800 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:02.800 CC lib/nvme/nvme_tcp.o 00:04:02.800 CC lib/nvme/nvme_opal.o 00:04:02.800 CC lib/nvme/nvme_io_msg.o 00:04:02.800 CC lib/nvme/nvme_zns.o 00:04:02.800 CC lib/nvme/nvme_poll_group.o 00:04:02.800 CC lib/nvme/nvme_stubs.o 00:04:02.800 CC 
lib/nvme/nvme_auth.o 00:04:02.800 CC lib/nvme/nvme_cuse.o 00:04:02.800 CC lib/nvme/nvme_rdma.o 00:04:02.800 LIB libspdk_thread.a 00:04:02.800 SO libspdk_thread.so.11.0 00:04:03.060 SYMLINK libspdk_thread.so 00:04:03.318 CC lib/blob/blobstore.o 00:04:03.318 CC lib/blob/request.o 00:04:03.318 CC lib/blob/zeroes.o 00:04:03.318 CC lib/blob/blob_bs_dev.o 00:04:03.318 CC lib/fsdev/fsdev.o 00:04:03.318 CC lib/fsdev/fsdev_io.o 00:04:03.318 CC lib/init/json_config.o 00:04:03.318 CC lib/init/subsystem.o 00:04:03.318 CC lib/fsdev/fsdev_rpc.o 00:04:03.318 CC lib/init/subsystem_rpc.o 00:04:03.318 CC lib/virtio/virtio.o 00:04:03.318 CC lib/accel/accel.o 00:04:03.318 CC lib/init/rpc.o 00:04:03.318 CC lib/accel/accel_rpc.o 00:04:03.318 CC lib/virtio/virtio_vhost_user.o 00:04:03.318 CC lib/accel/accel_sw.o 00:04:03.318 CC lib/virtio/virtio_vfio_user.o 00:04:03.318 CC lib/virtio/virtio_pci.o 00:04:03.578 LIB libspdk_init.a 00:04:03.578 LIB libspdk_virtio.a 00:04:03.578 SO libspdk_init.so.6.0 00:04:03.578 SO libspdk_virtio.so.7.0 00:04:03.838 SYMLINK libspdk_init.so 00:04:03.838 SYMLINK libspdk_virtio.so 00:04:03.838 LIB libspdk_fsdev.a 00:04:03.838 SO libspdk_fsdev.so.2.0 00:04:04.098 SYMLINK libspdk_fsdev.so 00:04:04.098 CC lib/event/app.o 00:04:04.098 CC lib/event/reactor.o 00:04:04.098 CC lib/event/log_rpc.o 00:04:04.098 CC lib/event/app_rpc.o 00:04:04.098 CC lib/event/scheduler_static.o 00:04:04.098 LIB libspdk_accel.a 00:04:04.358 SO libspdk_accel.so.16.0 00:04:04.358 LIB libspdk_nvme.a 00:04:04.358 SYMLINK libspdk_accel.so 00:04:04.358 SO libspdk_nvme.so.15.0 00:04:04.358 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:04.358 LIB libspdk_event.a 00:04:04.618 SO libspdk_event.so.14.0 00:04:04.618 SYMLINK libspdk_nvme.so 00:04:04.618 SYMLINK libspdk_event.so 00:04:04.618 CC lib/bdev/bdev.o 00:04:04.618 CC lib/bdev/bdev_rpc.o 00:04:04.618 CC lib/bdev/bdev_zone.o 00:04:04.618 CC lib/bdev/part.o 00:04:04.618 CC lib/bdev/scsi_nvme.o 00:04:04.879 LIB libspdk_fuse_dispatcher.a 00:04:04.879 SO libspdk_fuse_dispatcher.so.1.0 00:04:04.879 SYMLINK libspdk_fuse_dispatcher.so 00:04:05.449 LIB libspdk_blob.a 00:04:05.449 SO libspdk_blob.so.12.0 00:04:05.709 SYMLINK libspdk_blob.so 00:04:05.969 CC lib/blobfs/blobfs.o 00:04:05.969 CC lib/blobfs/tree.o 00:04:05.969 CC lib/lvol/lvol.o 00:04:06.539 LIB libspdk_bdev.a 00:04:06.539 LIB libspdk_blobfs.a 00:04:06.539 SO libspdk_blobfs.so.11.0 00:04:06.800 SO libspdk_bdev.so.17.0 00:04:06.800 LIB libspdk_lvol.a 00:04:06.800 SYMLINK libspdk_blobfs.so 00:04:06.800 SO libspdk_lvol.so.11.0 00:04:06.800 SYMLINK libspdk_bdev.so 00:04:06.800 SYMLINK libspdk_lvol.so 00:04:07.061 CC lib/nbd/nbd.o 00:04:07.061 CC lib/nbd/nbd_rpc.o 00:04:07.061 CC lib/nvmf/ctrlr.o 00:04:07.061 CC lib/ublk/ublk.o 00:04:07.061 CC lib/ublk/ublk_rpc.o 00:04:07.061 CC lib/nvmf/ctrlr_discovery.o 00:04:07.061 CC lib/nvmf/ctrlr_bdev.o 00:04:07.061 CC lib/nvmf/subsystem.o 00:04:07.061 CC lib/nvmf/nvmf.o 00:04:07.061 CC lib/ftl/ftl_core.o 00:04:07.061 CC lib/scsi/dev.o 00:04:07.061 CC lib/nvmf/nvmf_rpc.o 00:04:07.061 CC lib/ftl/ftl_init.o 00:04:07.061 CC lib/scsi/lun.o 00:04:07.061 CC lib/scsi/port.o 00:04:07.061 CC lib/nvmf/transport.o 00:04:07.061 CC lib/ftl/ftl_layout.o 00:04:07.061 CC lib/ftl/ftl_debug.o 00:04:07.321 CC lib/scsi/scsi.o 00:04:07.321 CC lib/nvmf/tcp.o 00:04:07.321 CC lib/scsi/scsi_bdev.o 00:04:07.321 CC lib/ftl/ftl_io.o 00:04:07.321 CC lib/nvmf/stubs.o 00:04:07.321 CC lib/ftl/ftl_sb.o 00:04:07.321 CC lib/scsi/scsi_pr.o 00:04:07.321 CC lib/nvmf/mdns_server.o 00:04:07.321 CC lib/ftl/ftl_l2p.o 
00:04:07.321 CC lib/scsi/scsi_rpc.o 00:04:07.321 CC lib/nvmf/rdma.o 00:04:07.321 CC lib/ftl/ftl_l2p_flat.o 00:04:07.321 CC lib/ftl/ftl_nv_cache.o 00:04:07.321 CC lib/nvmf/auth.o 00:04:07.321 CC lib/scsi/task.o 00:04:07.321 CC lib/ftl/ftl_band.o 00:04:07.321 CC lib/ftl/ftl_band_ops.o 00:04:07.321 CC lib/ftl/ftl_writer.o 00:04:07.321 CC lib/ftl/ftl_rq.o 00:04:07.321 CC lib/ftl/ftl_reloc.o 00:04:07.321 CC lib/ftl/ftl_l2p_cache.o 00:04:07.321 CC lib/ftl/ftl_p2l.o 00:04:07.321 CC lib/ftl/ftl_p2l_log.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:07.321 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:07.321 CC lib/ftl/utils/ftl_conf.o 00:04:07.321 CC lib/ftl/utils/ftl_md.o 00:04:07.321 CC lib/ftl/utils/ftl_mempool.o 00:04:07.321 CC lib/ftl/utils/ftl_bitmap.o 00:04:07.321 CC lib/ftl/utils/ftl_property.o 00:04:07.321 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:07.321 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:07.321 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:07.321 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:07.321 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:07.321 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:07.321 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:07.321 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:07.322 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:07.322 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:07.322 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:07.322 CC lib/ftl/base/ftl_base_dev.o 00:04:07.322 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:07.322 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:07.322 CC lib/ftl/ftl_trace.o 00:04:07.322 CC lib/ftl/base/ftl_base_bdev.o 00:04:07.891 LIB libspdk_nbd.a 00:04:07.891 SO libspdk_nbd.so.7.0 00:04:07.891 LIB libspdk_scsi.a 00:04:07.891 SYMLINK libspdk_nbd.so 00:04:07.891 SO libspdk_scsi.so.9.0 00:04:07.891 SYMLINK libspdk_scsi.so 00:04:07.891 LIB libspdk_ublk.a 00:04:07.891 SO libspdk_ublk.so.3.0 00:04:08.151 SYMLINK libspdk_ublk.so 00:04:08.151 LIB libspdk_ftl.a 00:04:08.151 CC lib/vhost/vhost.o 00:04:08.151 CC lib/vhost/vhost_blk.o 00:04:08.151 CC lib/vhost/vhost_rpc.o 00:04:08.151 CC lib/vhost/vhost_scsi.o 00:04:08.151 CC lib/vhost/rte_vhost_user.o 00:04:08.411 CC lib/iscsi/conn.o 00:04:08.411 CC lib/iscsi/init_grp.o 00:04:08.411 CC lib/iscsi/iscsi.o 00:04:08.411 CC lib/iscsi/param.o 00:04:08.411 CC lib/iscsi/tgt_node.o 00:04:08.411 CC lib/iscsi/portal_grp.o 00:04:08.411 CC lib/iscsi/iscsi_subsystem.o 00:04:08.411 CC lib/iscsi/iscsi_rpc.o 00:04:08.411 CC lib/iscsi/task.o 00:04:08.412 SO libspdk_ftl.so.9.0 00:04:08.672 SYMLINK libspdk_ftl.so 00:04:08.933 LIB libspdk_nvmf.a 00:04:08.933 SO libspdk_nvmf.so.20.0 00:04:08.933 SYMLINK libspdk_nvmf.so 00:04:09.193 LIB libspdk_vhost.a 00:04:09.193 SO libspdk_vhost.so.8.0 00:04:09.193 SYMLINK libspdk_vhost.so 00:04:09.193 LIB libspdk_iscsi.a 00:04:09.453 SO libspdk_iscsi.so.8.0 00:04:09.453 SYMLINK libspdk_iscsi.so 00:04:10.024 CC module/env_dpdk/env_dpdk_rpc.o 00:04:10.284 LIB libspdk_env_dpdk_rpc.a 00:04:10.284 CC module/blob/bdev/blob_bdev.o 00:04:10.284 SO libspdk_env_dpdk_rpc.so.6.0 00:04:10.284 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:04:10.284 CC module/keyring/file/keyring.o 00:04:10.284 CC module/keyring/file/keyring_rpc.o 00:04:10.284 CC module/keyring/linux/keyring.o 00:04:10.284 CC module/keyring/linux/keyring_rpc.o 00:04:10.284 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:10.284 CC module/accel/iaa/accel_iaa.o 00:04:10.284 CC module/accel/ioat/accel_ioat.o 00:04:10.284 CC module/accel/iaa/accel_iaa_rpc.o 00:04:10.284 CC module/accel/ioat/accel_ioat_rpc.o 00:04:10.284 CC module/fsdev/aio/fsdev_aio.o 00:04:10.284 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:10.284 CC module/fsdev/aio/linux_aio_mgr.o 00:04:10.284 CC module/accel/dsa/accel_dsa.o 00:04:10.284 CC module/accel/dsa/accel_dsa_rpc.o 00:04:10.284 CC module/scheduler/gscheduler/gscheduler.o 00:04:10.284 CC module/sock/posix/posix.o 00:04:10.284 CC module/accel/error/accel_error.o 00:04:10.284 CC module/accel/error/accel_error_rpc.o 00:04:10.284 SYMLINK libspdk_env_dpdk_rpc.so 00:04:10.545 LIB libspdk_scheduler_dynamic.a 00:04:10.545 LIB libspdk_keyring_file.a 00:04:10.545 LIB libspdk_scheduler_dpdk_governor.a 00:04:10.545 LIB libspdk_keyring_linux.a 00:04:10.545 LIB libspdk_scheduler_gscheduler.a 00:04:10.545 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:10.545 SO libspdk_scheduler_dynamic.so.4.0 00:04:10.545 SO libspdk_keyring_file.so.2.0 00:04:10.545 SO libspdk_keyring_linux.so.1.0 00:04:10.545 SO libspdk_scheduler_gscheduler.so.4.0 00:04:10.545 LIB libspdk_accel_ioat.a 00:04:10.545 LIB libspdk_accel_iaa.a 00:04:10.545 LIB libspdk_accel_error.a 00:04:10.545 LIB libspdk_blob_bdev.a 00:04:10.545 SO libspdk_accel_ioat.so.6.0 00:04:10.545 SYMLINK libspdk_scheduler_dynamic.so 00:04:10.545 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:10.545 SO libspdk_accel_iaa.so.3.0 00:04:10.545 SO libspdk_accel_error.so.2.0 00:04:10.545 SYMLINK libspdk_keyring_linux.so 00:04:10.545 SYMLINK libspdk_scheduler_gscheduler.so 00:04:10.545 LIB libspdk_accel_dsa.a 00:04:10.545 SYMLINK libspdk_keyring_file.so 00:04:10.545 SO libspdk_blob_bdev.so.12.0 00:04:10.545 SO libspdk_accel_dsa.so.5.0 00:04:10.545 SYMLINK libspdk_accel_ioat.so 00:04:10.545 SYMLINK libspdk_accel_iaa.so 00:04:10.545 SYMLINK libspdk_accel_error.so 00:04:10.811 SYMLINK libspdk_blob_bdev.so 00:04:10.811 SYMLINK libspdk_accel_dsa.so 00:04:10.811 LIB libspdk_fsdev_aio.a 00:04:10.811 LIB libspdk_sock_posix.a 00:04:10.811 SO libspdk_fsdev_aio.so.1.0 00:04:11.081 SO libspdk_sock_posix.so.6.0 00:04:11.081 SYMLINK libspdk_fsdev_aio.so 00:04:11.081 SYMLINK libspdk_sock_posix.so 00:04:11.341 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:11.341 CC module/bdev/delay/vbdev_delay.o 00:04:11.341 CC module/blobfs/bdev/blobfs_bdev.o 00:04:11.341 CC module/bdev/ftl/bdev_ftl.o 00:04:11.341 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:11.341 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:11.341 CC module/bdev/lvol/vbdev_lvol.o 00:04:11.341 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:11.341 CC module/bdev/gpt/gpt.o 00:04:11.341 CC module/bdev/error/vbdev_error.o 00:04:11.341 CC module/bdev/gpt/vbdev_gpt.o 00:04:11.341 CC module/bdev/error/vbdev_error_rpc.o 00:04:11.341 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:11.341 CC module/bdev/nvme/bdev_nvme.o 00:04:11.341 CC module/bdev/raid/bdev_raid.o 00:04:11.341 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:11.341 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:11.341 CC module/bdev/malloc/bdev_malloc.o 00:04:11.341 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:11.341 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:11.341 CC 
module/bdev/raid/bdev_raid_rpc.o 00:04:11.341 CC module/bdev/nvme/nvme_rpc.o 00:04:11.341 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:11.341 CC module/bdev/aio/bdev_aio.o 00:04:11.341 CC module/bdev/iscsi/bdev_iscsi.o 00:04:11.341 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:11.341 CC module/bdev/raid/bdev_raid_sb.o 00:04:11.341 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:11.341 CC module/bdev/null/bdev_null.o 00:04:11.341 CC module/bdev/nvme/bdev_mdns_client.o 00:04:11.341 CC module/bdev/raid/raid0.o 00:04:11.341 CC module/bdev/aio/bdev_aio_rpc.o 00:04:11.341 CC module/bdev/nvme/vbdev_opal.o 00:04:11.341 CC module/bdev/split/vbdev_split.o 00:04:11.341 CC module/bdev/null/bdev_null_rpc.o 00:04:11.341 CC module/bdev/raid/raid1.o 00:04:11.341 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:11.341 CC module/bdev/split/vbdev_split_rpc.o 00:04:11.341 CC module/bdev/raid/concat.o 00:04:11.341 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:11.341 CC module/bdev/passthru/vbdev_passthru.o 00:04:11.341 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:11.600 LIB libspdk_blobfs_bdev.a 00:04:11.600 SO libspdk_blobfs_bdev.so.6.0 00:04:11.600 LIB libspdk_bdev_gpt.a 00:04:11.600 LIB libspdk_bdev_error.a 00:04:11.600 LIB libspdk_bdev_ftl.a 00:04:11.600 LIB libspdk_bdev_split.a 00:04:11.600 SYMLINK libspdk_blobfs_bdev.so 00:04:11.600 LIB libspdk_bdev_null.a 00:04:11.600 SO libspdk_bdev_gpt.so.6.0 00:04:11.600 SO libspdk_bdev_error.so.6.0 00:04:11.600 SO libspdk_bdev_ftl.so.6.0 00:04:11.600 LIB libspdk_bdev_passthru.a 00:04:11.600 SO libspdk_bdev_split.so.6.0 00:04:11.600 LIB libspdk_bdev_zone_block.a 00:04:11.600 SO libspdk_bdev_null.so.6.0 00:04:11.600 LIB libspdk_bdev_aio.a 00:04:11.600 LIB libspdk_bdev_delay.a 00:04:11.600 SO libspdk_bdev_passthru.so.6.0 00:04:11.600 LIB libspdk_bdev_iscsi.a 00:04:11.600 SYMLINK libspdk_bdev_error.so 00:04:11.600 SYMLINK libspdk_bdev_gpt.so 00:04:11.600 SO libspdk_bdev_zone_block.so.6.0 00:04:11.600 SYMLINK libspdk_bdev_ftl.so 00:04:11.600 SO libspdk_bdev_aio.so.6.0 00:04:11.600 LIB libspdk_bdev_malloc.a 00:04:11.600 SO libspdk_bdev_delay.so.6.0 00:04:11.600 SYMLINK libspdk_bdev_split.so 00:04:11.860 SO libspdk_bdev_iscsi.so.6.0 00:04:11.860 SYMLINK libspdk_bdev_null.so 00:04:11.860 SO libspdk_bdev_malloc.so.6.0 00:04:11.860 SYMLINK libspdk_bdev_passthru.so 00:04:11.860 SYMLINK libspdk_bdev_zone_block.so 00:04:11.860 SYMLINK libspdk_bdev_delay.so 00:04:11.860 SYMLINK libspdk_bdev_aio.so 00:04:11.860 LIB libspdk_bdev_lvol.a 00:04:11.860 LIB libspdk_bdev_virtio.a 00:04:11.860 SYMLINK libspdk_bdev_iscsi.so 00:04:11.860 SYMLINK libspdk_bdev_malloc.so 00:04:11.860 SO libspdk_bdev_lvol.so.6.0 00:04:11.860 SO libspdk_bdev_virtio.so.6.0 00:04:11.860 SYMLINK libspdk_bdev_lvol.so 00:04:11.860 SYMLINK libspdk_bdev_virtio.so 00:04:12.120 LIB libspdk_bdev_raid.a 00:04:12.120 SO libspdk_bdev_raid.so.6.0 00:04:12.381 SYMLINK libspdk_bdev_raid.so 00:04:13.335 LIB libspdk_bdev_nvme.a 00:04:13.335 SO libspdk_bdev_nvme.so.7.1 00:04:13.335 SYMLINK libspdk_bdev_nvme.so 00:04:14.278 CC module/event/subsystems/iobuf/iobuf.o 00:04:14.278 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:14.278 CC module/event/subsystems/vmd/vmd.o 00:04:14.278 CC module/event/subsystems/sock/sock.o 00:04:14.278 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:14.278 CC module/event/subsystems/keyring/keyring.o 00:04:14.278 CC module/event/subsystems/scheduler/scheduler.o 00:04:14.278 CC module/event/subsystems/fsdev/fsdev.o 00:04:14.278 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:14.278 LIB 
libspdk_event_keyring.a 00:04:14.278 LIB libspdk_event_iobuf.a 00:04:14.278 LIB libspdk_event_sock.a 00:04:14.278 LIB libspdk_event_scheduler.a 00:04:14.278 LIB libspdk_event_vmd.a 00:04:14.278 LIB libspdk_event_vhost_blk.a 00:04:14.278 LIB libspdk_event_fsdev.a 00:04:14.278 SO libspdk_event_keyring.so.1.0 00:04:14.278 SO libspdk_event_scheduler.so.4.0 00:04:14.278 SO libspdk_event_sock.so.5.0 00:04:14.278 SO libspdk_event_fsdev.so.1.0 00:04:14.278 SO libspdk_event_iobuf.so.3.0 00:04:14.278 SO libspdk_event_vhost_blk.so.3.0 00:04:14.278 SO libspdk_event_vmd.so.6.0 00:04:14.539 SYMLINK libspdk_event_keyring.so 00:04:14.539 SYMLINK libspdk_event_scheduler.so 00:04:14.539 SYMLINK libspdk_event_fsdev.so 00:04:14.539 SYMLINK libspdk_event_sock.so 00:04:14.539 SYMLINK libspdk_event_vhost_blk.so 00:04:14.539 SYMLINK libspdk_event_iobuf.so 00:04:14.539 SYMLINK libspdk_event_vmd.so 00:04:14.800 CC module/event/subsystems/accel/accel.o 00:04:15.060 LIB libspdk_event_accel.a 00:04:15.060 SO libspdk_event_accel.so.6.0 00:04:15.060 SYMLINK libspdk_event_accel.so 00:04:15.631 CC module/event/subsystems/bdev/bdev.o 00:04:15.631 LIB libspdk_event_bdev.a 00:04:15.631 SO libspdk_event_bdev.so.6.0 00:04:15.891 SYMLINK libspdk_event_bdev.so 00:04:16.151 CC module/event/subsystems/scsi/scsi.o 00:04:16.151 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:16.151 CC module/event/subsystems/ublk/ublk.o 00:04:16.151 CC module/event/subsystems/nbd/nbd.o 00:04:16.151 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:16.412 LIB libspdk_event_nbd.a 00:04:16.412 LIB libspdk_event_ublk.a 00:04:16.412 LIB libspdk_event_scsi.a 00:04:16.412 SO libspdk_event_ublk.so.3.0 00:04:16.412 SO libspdk_event_nbd.so.6.0 00:04:16.412 SO libspdk_event_scsi.so.6.0 00:04:16.412 LIB libspdk_event_nvmf.a 00:04:16.412 SYMLINK libspdk_event_nbd.so 00:04:16.412 SO libspdk_event_nvmf.so.6.0 00:04:16.412 SYMLINK libspdk_event_scsi.so 00:04:16.412 SYMLINK libspdk_event_ublk.so 00:04:16.412 SYMLINK libspdk_event_nvmf.so 00:04:16.983 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:16.983 CC module/event/subsystems/iscsi/iscsi.o 00:04:16.983 LIB libspdk_event_vhost_scsi.a 00:04:16.983 LIB libspdk_event_iscsi.a 00:04:16.983 SO libspdk_event_vhost_scsi.so.3.0 00:04:16.983 SO libspdk_event_iscsi.so.6.0 00:04:17.243 SYMLINK libspdk_event_vhost_scsi.so 00:04:17.243 SYMLINK libspdk_event_iscsi.so 00:04:17.504 SO libspdk.so.6.0 00:04:17.504 SYMLINK libspdk.so 00:04:17.764 CC app/spdk_top/spdk_top.o 00:04:17.764 CXX app/trace/trace.o 00:04:17.764 CC app/spdk_lspci/spdk_lspci.o 00:04:17.764 CC app/spdk_nvme_identify/identify.o 00:04:17.764 CC app/spdk_nvme_discover/discovery_aer.o 00:04:17.764 CC app/trace_record/trace_record.o 00:04:17.764 CC test/rpc_client/rpc_client_test.o 00:04:17.764 TEST_HEADER include/spdk/accel.h 00:04:17.764 TEST_HEADER include/spdk/accel_module.h 00:04:17.764 TEST_HEADER include/spdk/barrier.h 00:04:17.764 TEST_HEADER include/spdk/assert.h 00:04:17.764 TEST_HEADER include/spdk/base64.h 00:04:17.764 TEST_HEADER include/spdk/bdev_module.h 00:04:17.764 TEST_HEADER include/spdk/bdev.h 00:04:17.764 TEST_HEADER include/spdk/bdev_zone.h 00:04:17.764 TEST_HEADER include/spdk/bit_pool.h 00:04:17.764 TEST_HEADER include/spdk/bit_array.h 00:04:17.764 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:17.764 TEST_HEADER include/spdk/blob_bdev.h 00:04:17.764 TEST_HEADER include/spdk/blobfs.h 00:04:17.764 CC app/spdk_nvme_perf/perf.o 00:04:17.764 TEST_HEADER include/spdk/conf.h 00:04:17.764 TEST_HEADER include/spdk/blob.h 00:04:17.764 
TEST_HEADER include/spdk/config.h 00:04:17.764 TEST_HEADER include/spdk/cpuset.h 00:04:17.764 TEST_HEADER include/spdk/crc32.h 00:04:17.764 TEST_HEADER include/spdk/crc64.h 00:04:17.764 TEST_HEADER include/spdk/crc16.h 00:04:17.764 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:17.764 TEST_HEADER include/spdk/dif.h 00:04:17.764 TEST_HEADER include/spdk/env.h 00:04:17.764 TEST_HEADER include/spdk/dma.h 00:04:17.764 TEST_HEADER include/spdk/endian.h 00:04:17.764 TEST_HEADER include/spdk/env_dpdk.h 00:04:17.764 TEST_HEADER include/spdk/fd_group.h 00:04:17.764 TEST_HEADER include/spdk/fd.h 00:04:17.764 TEST_HEADER include/spdk/file.h 00:04:17.764 TEST_HEADER include/spdk/event.h 00:04:17.764 TEST_HEADER include/spdk/ftl.h 00:04:17.764 TEST_HEADER include/spdk/fsdev.h 00:04:17.764 TEST_HEADER include/spdk/fsdev_module.h 00:04:17.764 TEST_HEADER include/spdk/gpt_spec.h 00:04:17.764 TEST_HEADER include/spdk/hexlify.h 00:04:17.764 TEST_HEADER include/spdk/histogram_data.h 00:04:17.764 TEST_HEADER include/spdk/idxd.h 00:04:17.764 TEST_HEADER include/spdk/idxd_spec.h 00:04:17.764 TEST_HEADER include/spdk/init.h 00:04:17.764 CC app/iscsi_tgt/iscsi_tgt.o 00:04:17.764 TEST_HEADER include/spdk/ioat.h 00:04:17.764 TEST_HEADER include/spdk/ioat_spec.h 00:04:17.764 TEST_HEADER include/spdk/json.h 00:04:17.764 TEST_HEADER include/spdk/jsonrpc.h 00:04:17.764 TEST_HEADER include/spdk/iscsi_spec.h 00:04:17.764 TEST_HEADER include/spdk/log.h 00:04:17.764 TEST_HEADER include/spdk/keyring.h 00:04:17.764 TEST_HEADER include/spdk/likely.h 00:04:17.764 CC app/nvmf_tgt/nvmf_main.o 00:04:17.764 TEST_HEADER include/spdk/keyring_module.h 00:04:17.764 TEST_HEADER include/spdk/lvol.h 00:04:17.764 TEST_HEADER include/spdk/md5.h 00:04:17.764 TEST_HEADER include/spdk/memory.h 00:04:17.764 TEST_HEADER include/spdk/mmio.h 00:04:18.031 TEST_HEADER include/spdk/nbd.h 00:04:18.031 TEST_HEADER include/spdk/notify.h 00:04:18.031 TEST_HEADER include/spdk/nvme.h 00:04:18.031 TEST_HEADER include/spdk/net.h 00:04:18.031 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:18.031 TEST_HEADER include/spdk/nvme_intel.h 00:04:18.031 CC app/spdk_dd/spdk_dd.o 00:04:18.031 TEST_HEADER include/spdk/nvme_spec.h 00:04:18.031 TEST_HEADER include/spdk/nvme_zns.h 00:04:18.031 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:18.031 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:18.031 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:18.031 TEST_HEADER include/spdk/nvmf.h 00:04:18.031 TEST_HEADER include/spdk/nvmf_spec.h 00:04:18.031 TEST_HEADER include/spdk/nvmf_transport.h 00:04:18.031 TEST_HEADER include/spdk/pci_ids.h 00:04:18.031 TEST_HEADER include/spdk/opal.h 00:04:18.031 TEST_HEADER include/spdk/opal_spec.h 00:04:18.031 TEST_HEADER include/spdk/pipe.h 00:04:18.031 TEST_HEADER include/spdk/queue.h 00:04:18.031 TEST_HEADER include/spdk/reduce.h 00:04:18.031 TEST_HEADER include/spdk/scheduler.h 00:04:18.031 TEST_HEADER include/spdk/scsi.h 00:04:18.031 TEST_HEADER include/spdk/scsi_spec.h 00:04:18.031 TEST_HEADER include/spdk/rpc.h 00:04:18.031 TEST_HEADER include/spdk/stdinc.h 00:04:18.031 TEST_HEADER include/spdk/sock.h 00:04:18.031 TEST_HEADER include/spdk/string.h 00:04:18.031 TEST_HEADER include/spdk/trace.h 00:04:18.031 TEST_HEADER include/spdk/trace_parser.h 00:04:18.031 TEST_HEADER include/spdk/thread.h 00:04:18.031 TEST_HEADER include/spdk/ublk.h 00:04:18.031 TEST_HEADER include/spdk/util.h 00:04:18.031 TEST_HEADER include/spdk/tree.h 00:04:18.031 TEST_HEADER include/spdk/uuid.h 00:04:18.031 TEST_HEADER include/spdk/version.h 00:04:18.031 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:04:18.031 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:18.031 TEST_HEADER include/spdk/vhost.h 00:04:18.031 TEST_HEADER include/spdk/zipf.h 00:04:18.031 TEST_HEADER include/spdk/xor.h 00:04:18.031 TEST_HEADER include/spdk/vmd.h 00:04:18.031 CXX test/cpp_headers/accel_module.o 00:04:18.031 CXX test/cpp_headers/accel.o 00:04:18.031 CXX test/cpp_headers/assert.o 00:04:18.031 CC app/spdk_tgt/spdk_tgt.o 00:04:18.031 CXX test/cpp_headers/base64.o 00:04:18.031 CXX test/cpp_headers/barrier.o 00:04:18.031 CXX test/cpp_headers/bdev_module.o 00:04:18.031 CXX test/cpp_headers/bdev.o 00:04:18.031 CXX test/cpp_headers/bit_array.o 00:04:18.031 CXX test/cpp_headers/bit_pool.o 00:04:18.031 CXX test/cpp_headers/bdev_zone.o 00:04:18.031 CXX test/cpp_headers/blob_bdev.o 00:04:18.031 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.031 CXX test/cpp_headers/blobfs.o 00:04:18.031 CXX test/cpp_headers/blob.o 00:04:18.031 CXX test/cpp_headers/cpuset.o 00:04:18.031 CXX test/cpp_headers/config.o 00:04:18.031 CXX test/cpp_headers/conf.o 00:04:18.031 CXX test/cpp_headers/crc16.o 00:04:18.031 CXX test/cpp_headers/crc32.o 00:04:18.031 CXX test/cpp_headers/crc64.o 00:04:18.031 CXX test/cpp_headers/dma.o 00:04:18.031 CXX test/cpp_headers/dif.o 00:04:18.031 CXX test/cpp_headers/endian.o 00:04:18.031 CXX test/cpp_headers/event.o 00:04:18.031 CXX test/cpp_headers/env_dpdk.o 00:04:18.031 CXX test/cpp_headers/env.o 00:04:18.031 CXX test/cpp_headers/fd_group.o 00:04:18.031 CXX test/cpp_headers/file.o 00:04:18.031 CXX test/cpp_headers/fd.o 00:04:18.031 CXX test/cpp_headers/fsdev.o 00:04:18.031 CXX test/cpp_headers/fsdev_module.o 00:04:18.031 CXX test/cpp_headers/ftl.o 00:04:18.031 CXX test/cpp_headers/gpt_spec.o 00:04:18.031 CXX test/cpp_headers/histogram_data.o 00:04:18.031 CXX test/cpp_headers/hexlify.o 00:04:18.031 CXX test/cpp_headers/idxd.o 00:04:18.031 CXX test/cpp_headers/idxd_spec.o 00:04:18.031 CXX test/cpp_headers/ioat_spec.o 00:04:18.031 CXX test/cpp_headers/ioat.o 00:04:18.031 CXX test/cpp_headers/iscsi_spec.o 00:04:18.031 CXX test/cpp_headers/init.o 00:04:18.031 CXX test/cpp_headers/json.o 00:04:18.031 CXX test/cpp_headers/jsonrpc.o 00:04:18.031 CXX test/cpp_headers/keyring.o 00:04:18.031 CXX test/cpp_headers/keyring_module.o 00:04:18.031 CXX test/cpp_headers/likely.o 00:04:18.031 CXX test/cpp_headers/log.o 00:04:18.031 CXX test/cpp_headers/mmio.o 00:04:18.031 CXX test/cpp_headers/lvol.o 00:04:18.031 CXX test/cpp_headers/memory.o 00:04:18.031 CXX test/cpp_headers/md5.o 00:04:18.031 CXX test/cpp_headers/nbd.o 00:04:18.031 CXX test/cpp_headers/net.o 00:04:18.031 CXX test/cpp_headers/nvme.o 00:04:18.031 CXX test/cpp_headers/notify.o 00:04:18.031 CXX test/cpp_headers/nvme_intel.o 00:04:18.031 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.031 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.031 CXX test/cpp_headers/nvme_zns.o 00:04:18.031 CXX test/cpp_headers/nvme_spec.o 00:04:18.031 CXX test/cpp_headers/nvmf_cmd.o 00:04:18.031 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:18.031 CXX test/cpp_headers/nvmf_transport.o 00:04:18.031 CXX test/cpp_headers/nvmf_spec.o 00:04:18.031 CXX test/cpp_headers/nvmf.o 00:04:18.031 CXX test/cpp_headers/opal.o 00:04:18.031 CXX test/cpp_headers/pci_ids.o 00:04:18.031 CXX test/cpp_headers/opal_spec.o 00:04:18.031 CXX test/cpp_headers/pipe.o 00:04:18.031 CXX test/cpp_headers/queue.o 00:04:18.031 CXX test/cpp_headers/reduce.o 00:04:18.031 CXX test/cpp_headers/rpc.o 00:04:18.031 CXX test/cpp_headers/scheduler.o 00:04:18.031 CXX test/cpp_headers/scsi.o 00:04:18.031 CXX 
test/cpp_headers/scsi_spec.o 00:04:18.031 CXX test/cpp_headers/sock.o 00:04:18.031 CXX test/cpp_headers/stdinc.o 00:04:18.031 CXX test/cpp_headers/string.o 00:04:18.031 CXX test/cpp_headers/thread.o 00:04:18.031 CXX test/cpp_headers/trace.o 00:04:18.031 CXX test/cpp_headers/trace_parser.o 00:04:18.031 CXX test/cpp_headers/tree.o 00:04:18.031 CXX test/cpp_headers/ublk.o 00:04:18.031 CXX test/cpp_headers/util.o 00:04:18.031 CC test/env/vtophys/vtophys.o 00:04:18.031 CC test/app/stub/stub.o 00:04:18.032 CC test/app/histogram_perf/histogram_perf.o 00:04:18.032 CXX test/cpp_headers/uuid.o 00:04:18.032 CC app/fio/nvme/fio_plugin.o 00:04:18.327 CC examples/util/zipf/zipf.o 00:04:18.327 CC test/thread/poller_perf/poller_perf.o 00:04:18.327 CC test/env/memory/memory_ut.o 00:04:18.327 CC test/app/jsoncat/jsoncat.o 00:04:18.327 CC examples/ioat/perf/perf.o 00:04:18.327 CXX test/cpp_headers/version.o 00:04:18.327 CC test/env/pci/pci_ut.o 00:04:18.327 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:18.327 CC examples/ioat/verify/verify.o 00:04:18.328 CC test/dma/test_dma/test_dma.o 00:04:18.328 CC test/app/bdev_svc/bdev_svc.o 00:04:18.328 LINK spdk_lspci 00:04:18.328 CC app/fio/bdev/fio_plugin.o 00:04:18.603 LINK spdk_nvme_discover 00:04:18.603 LINK rpc_client_test 00:04:18.603 LINK nvmf_tgt 00:04:18.603 LINK interrupt_tgt 00:04:18.865 LINK iscsi_tgt 00:04:18.865 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:18.865 CC test/env/mem_callbacks/mem_callbacks.o 00:04:18.865 CXX test/cpp_headers/vfio_user_pci.o 00:04:18.865 CXX test/cpp_headers/vfio_user_spec.o 00:04:18.865 CXX test/cpp_headers/vhost.o 00:04:18.865 CXX test/cpp_headers/vmd.o 00:04:18.865 CXX test/cpp_headers/xor.o 00:04:18.865 CXX test/cpp_headers/zipf.o 00:04:18.865 LINK spdk_trace_record 00:04:18.865 LINK poller_perf 00:04:18.865 LINK vtophys 00:04:18.865 LINK histogram_perf 00:04:18.865 LINK jsoncat 00:04:18.865 LINK spdk_tgt 00:04:18.865 LINK stub 00:04:18.865 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:18.865 LINK zipf 00:04:18.865 LINK env_dpdk_post_init 00:04:18.865 LINK bdev_svc 00:04:18.865 LINK ioat_perf 00:04:18.865 LINK verify 00:04:18.865 LINK spdk_dd 00:04:18.865 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:18.865 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:19.125 LINK spdk_trace 00:04:19.125 LINK mem_callbacks 00:04:19.125 LINK pci_ut 00:04:19.125 LINK test_dma 00:04:19.125 LINK nvme_fuzz 00:04:19.384 LINK vhost_fuzz 00:04:19.384 LINK spdk_nvme_perf 00:04:19.384 LINK spdk_bdev 00:04:19.384 LINK spdk_nvme_identify 00:04:19.384 LINK memory_ut 00:04:19.384 LINK spdk_nvme 00:04:19.384 CC examples/idxd/perf/perf.o 00:04:19.384 CC test/event/reactor/reactor.o 00:04:19.384 LINK spdk_top 00:04:19.384 CC test/event/reactor_perf/reactor_perf.o 00:04:19.384 CC test/event/event_perf/event_perf.o 00:04:19.384 CC examples/vmd/led/led.o 00:04:19.384 CC examples/sock/hello_world/hello_sock.o 00:04:19.384 CC app/vhost/vhost.o 00:04:19.384 CC examples/vmd/lsvmd/lsvmd.o 00:04:19.384 CC test/event/app_repeat/app_repeat.o 00:04:19.384 CC examples/thread/thread/thread_ex.o 00:04:19.384 CC test/event/scheduler/scheduler.o 00:04:19.384 LINK reactor 00:04:19.644 LINK led 00:04:19.644 LINK lsvmd 00:04:19.644 LINK reactor_perf 00:04:19.644 LINK event_perf 00:04:19.644 LINK vhost 00:04:19.644 LINK app_repeat 00:04:19.644 LINK hello_sock 00:04:19.644 LINK scheduler 00:04:19.644 LINK thread 00:04:19.644 LINK idxd_perf 00:04:19.644 CC test/nvme/aer/aer.o 00:04:19.644 CC test/nvme/fused_ordering/fused_ordering.o 00:04:19.644 CC 
test/nvme/boot_partition/boot_partition.o 00:04:19.644 CC test/nvme/e2edp/nvme_dp.o 00:04:19.644 CC test/nvme/simple_copy/simple_copy.o 00:04:19.644 CC test/nvme/err_injection/err_injection.o 00:04:19.644 CC test/nvme/reset/reset.o 00:04:19.644 CC test/nvme/overhead/overhead.o 00:04:19.644 CC test/nvme/startup/startup.o 00:04:19.644 CC test/nvme/fdp/fdp.o 00:04:19.644 CC test/nvme/cuse/cuse.o 00:04:19.644 CC test/nvme/compliance/nvme_compliance.o 00:04:19.644 CC test/nvme/connect_stress/connect_stress.o 00:04:19.644 CC test/nvme/reserve/reserve.o 00:04:19.644 CC test/nvme/sgl/sgl.o 00:04:19.644 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.644 CC test/accel/dif/dif.o 00:04:19.644 CC test/blobfs/mkfs/mkfs.o 00:04:19.904 LINK boot_partition 00:04:19.904 LINK fused_ordering 00:04:19.904 LINK connect_stress 00:04:19.904 LINK err_injection 00:04:19.904 LINK startup 00:04:19.904 CC test/lvol/esnap/esnap.o 00:04:19.904 LINK doorbell_aers 00:04:19.904 LINK simple_copy 00:04:19.904 LINK reserve 00:04:19.904 LINK aer 00:04:19.904 LINK nvme_dp 00:04:19.904 LINK reset 00:04:19.904 LINK sgl 00:04:19.904 LINK overhead 00:04:19.904 LINK mkfs 00:04:19.904 LINK fdp 00:04:19.904 LINK nvme_compliance 00:04:20.164 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:20.164 CC examples/nvme/hello_world/hello_world.o 00:04:20.164 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:20.164 CC examples/nvme/abort/abort.o 00:04:20.164 CC examples/nvme/hotplug/hotplug.o 00:04:20.164 CC examples/nvme/reconnect/reconnect.o 00:04:20.164 LINK iscsi_fuzz 00:04:20.164 CC examples/nvme/arbitration/arbitration.o 00:04:20.164 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:20.164 CC examples/accel/perf/accel_perf.o 00:04:20.164 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:20.164 CC examples/blob/cli/blobcli.o 00:04:20.164 CC examples/blob/hello_world/hello_blob.o 00:04:20.424 LINK dif 00:04:20.424 LINK pmr_persistence 00:04:20.424 LINK cmb_copy 00:04:20.424 LINK hello_world 00:04:20.424 LINK hotplug 00:04:20.424 LINK arbitration 00:04:20.424 LINK abort 00:04:20.424 LINK hello_blob 00:04:20.424 LINK hello_fsdev 00:04:20.424 LINK nvme_manage 00:04:20.424 LINK reconnect 00:04:20.684 LINK accel_perf 00:04:20.684 LINK blobcli 00:04:20.684 LINK cuse 00:04:20.946 CC test/bdev/bdevio/bdevio.o 00:04:21.205 CC examples/bdev/hello_world/hello_bdev.o 00:04:21.205 CC examples/bdev/bdevperf/bdevperf.o 00:04:21.205 LINK bdevio 00:04:21.465 LINK hello_bdev 00:04:21.725 LINK bdevperf 00:04:22.295 CC examples/nvmf/nvmf/nvmf.o 00:04:22.556 LINK nvmf 00:04:23.496 LINK esnap 00:04:23.757 00:04:23.757 real 0m55.643s 00:04:23.757 user 6m14.137s 00:04:23.757 sys 3m8.136s 00:04:23.757 18:58:57 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:23.757 18:58:57 make -- common/autotest_common.sh@10 -- $ set +x 00:04:23.757 ************************************ 00:04:23.757 END TEST make 00:04:23.757 ************************************ 00:04:23.757 18:58:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:23.757 18:58:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:23.757 18:58:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:23.757 18:58:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.757 18:58:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:23.757 18:58:58 -- pm/common@44 -- $ pid=7497 00:04:23.757 18:58:58 -- pm/common@50 -- $ kill -TERM 7497 00:04:23.757 18:58:58 -- pm/common@42 -- $ for monitor 
in "${MONITOR_RESOURCES[@]}" 00:04:23.757 18:58:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:23.757 18:58:58 -- pm/common@44 -- $ pid=7499 00:04:23.757 18:58:58 -- pm/common@50 -- $ kill -TERM 7499 00:04:23.757 18:58:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.757 18:58:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:23.757 18:58:58 -- pm/common@44 -- $ pid=7501 00:04:23.757 18:58:58 -- pm/common@50 -- $ kill -TERM 7501 00:04:23.757 18:58:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.757 18:58:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:23.757 18:58:58 -- pm/common@44 -- $ pid=7523 00:04:23.757 18:58:58 -- pm/common@50 -- $ sudo -E kill -TERM 7523 00:04:23.757 18:58:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:23.757 18:58:58 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:04:24.018 18:58:58 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.018 18:58:58 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.018 18:58:58 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.018 18:58:58 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.018 18:58:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.018 18:58:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.018 18:58:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.018 18:58:58 -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.018 18:58:58 -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.018 18:58:58 -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.018 18:58:58 -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.018 18:58:58 -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.018 18:58:58 -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.018 18:58:58 -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.018 18:58:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.018 18:58:58 -- scripts/common.sh@344 -- # case "$op" in 00:04:24.018 18:58:58 -- scripts/common.sh@345 -- # : 1 00:04:24.018 18:58:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.018 18:58:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.018 18:58:58 -- scripts/common.sh@365 -- # decimal 1 00:04:24.018 18:58:58 -- scripts/common.sh@353 -- # local d=1 00:04:24.018 18:58:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.018 18:58:58 -- scripts/common.sh@355 -- # echo 1 00:04:24.018 18:58:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.018 18:58:58 -- scripts/common.sh@366 -- # decimal 2 00:04:24.018 18:58:58 -- scripts/common.sh@353 -- # local d=2 00:04:24.018 18:58:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.018 18:58:58 -- scripts/common.sh@355 -- # echo 2 00:04:24.018 18:58:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.018 18:58:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.018 18:58:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.018 18:58:58 -- scripts/common.sh@368 -- # return 0 00:04:24.018 18:58:58 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.018 18:58:58 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.018 --rc genhtml_branch_coverage=1 00:04:24.018 --rc genhtml_function_coverage=1 00:04:24.018 --rc genhtml_legend=1 00:04:24.018 --rc geninfo_all_blocks=1 00:04:24.018 --rc geninfo_unexecuted_blocks=1 00:04:24.018 00:04:24.018 ' 00:04:24.018 18:58:58 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.018 --rc genhtml_branch_coverage=1 00:04:24.018 --rc genhtml_function_coverage=1 00:04:24.018 --rc genhtml_legend=1 00:04:24.018 --rc geninfo_all_blocks=1 00:04:24.018 --rc geninfo_unexecuted_blocks=1 00:04:24.018 00:04:24.018 ' 00:04:24.018 18:58:58 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.018 --rc genhtml_branch_coverage=1 00:04:24.018 --rc genhtml_function_coverage=1 00:04:24.018 --rc genhtml_legend=1 00:04:24.018 --rc geninfo_all_blocks=1 00:04:24.018 --rc geninfo_unexecuted_blocks=1 00:04:24.018 00:04:24.018 ' 00:04:24.018 18:58:58 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.018 --rc genhtml_branch_coverage=1 00:04:24.018 --rc genhtml_function_coverage=1 00:04:24.018 --rc genhtml_legend=1 00:04:24.018 --rc geninfo_all_blocks=1 00:04:24.018 --rc geninfo_unexecuted_blocks=1 00:04:24.018 00:04:24.018 ' 00:04:24.018 18:58:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.018 18:58:58 -- nvmf/common.sh@7 -- # uname -s 00:04:24.018 18:58:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.018 18:58:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.018 18:58:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.018 18:58:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.018 18:58:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.018 18:58:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.018 18:58:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.018 18:58:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.018 18:58:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.018 18:58:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.018 18:58:58 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:24.018 18:58:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:24.018 18:58:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.018 18:58:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.018 18:58:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:24.018 18:58:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.018 18:58:58 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:24.018 18:58:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.019 18:58:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.019 18:58:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.019 18:58:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.019 18:58:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.019 18:58:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.019 18:58:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.019 18:58:58 -- paths/export.sh@5 -- # export PATH 00:04:24.019 18:58:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.019 18:58:58 -- nvmf/common.sh@51 -- # : 0 00:04:24.019 18:58:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.019 18:58:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:24.019 18:58:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.019 18:58:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.019 18:58:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.019 18:58:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.019 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.019 18:58:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.019 18:58:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.019 18:58:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.019 18:58:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:24.019 18:58:58 -- spdk/autotest.sh@32 -- # uname -s 00:04:24.019 18:58:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:24.019 18:58:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:24.019 18:58:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:24.019 
18:58:58 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:24.019 18:58:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:24.019 18:58:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:24.019 18:58:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:24.019 18:58:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:24.019 18:58:58 -- spdk/autotest.sh@48 -- # udevadm_pid=88630 00:04:24.019 18:58:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:24.019 18:58:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:24.019 18:58:58 -- pm/common@17 -- # local monitor 00:04:24.019 18:58:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.279 18:58:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.279 18:58:58 -- pm/common@21 -- # date +%s 00:04:24.279 18:58:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.279 18:58:58 -- pm/common@21 -- # date +%s 00:04:24.279 18:58:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.279 18:58:58 -- pm/common@21 -- # date +%s 00:04:24.279 18:58:58 -- pm/common@25 -- # sleep 1 00:04:24.279 18:58:58 -- pm/common@21 -- # date +%s 00:04:24.279 18:58:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734112738 00:04:24.279 18:58:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734112738 00:04:24.279 18:58:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734112738 00:04:24.279 18:58:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734112738 00:04:24.279 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734112738_collect-cpu-temp.pm.log 00:04:24.279 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734112738_collect-vmstat.pm.log 00:04:24.279 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734112738_collect-cpu-load.pm.log 00:04:24.279 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734112738_collect-bmc-pm.bmc.pm.log 00:04:25.221 18:58:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:25.221 18:58:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:25.221 18:58:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.221 18:58:59 -- common/autotest_common.sh@10 -- # set +x 00:04:25.221 18:58:59 -- spdk/autotest.sh@59 -- # create_test_list 00:04:25.221 18:58:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:25.221 18:58:59 -- common/autotest_common.sh@10 -- # set +x 00:04:25.221 18:58:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:04:25.221 18:58:59 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:25.221 18:58:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:25.221 18:58:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:04:25.221 18:58:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:25.221 18:58:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:25.221 18:58:59 -- common/autotest_common.sh@1457 -- # uname 00:04:25.221 18:58:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:25.221 18:58:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:25.221 18:58:59 -- common/autotest_common.sh@1477 -- # uname 00:04:25.221 18:58:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:25.221 18:58:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:25.221 18:58:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:25.221 lcov: LCOV version 1.15 00:04:25.221 18:58:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:47.183 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:47.183 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:50.481 18:59:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:50.481 18:59:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.481 18:59:24 -- common/autotest_common.sh@10 -- # set +x 00:04:50.481 18:59:24 -- spdk/autotest.sh@78 -- # rm -f 00:04:50.481 18:59:24 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.778 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:53.778 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:54.039 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:54.039 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:54.039 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:54.039 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:54.039 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:54.039 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:54.039 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:04:54.039 18:59:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:54.039 18:59:28 -- 
common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:54.039 18:59:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:54.039 18:59:28 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:54.039 18:59:28 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:54.039 18:59:28 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:54.039 18:59:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:54.039 18:59:28 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:04:54.039 18:59:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:54.039 18:59:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:54.039 18:59:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:54.039 18:59:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.039 18:59:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.039 18:59:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:54.039 18:59:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.039 18:59:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.039 18:59:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:54.039 18:59:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:54.039 18:59:28 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:54.300 No valid GPT data, bailing 00:04:54.300 18:59:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.300 18:59:28 -- scripts/common.sh@394 -- # pt= 00:04:54.300 18:59:28 -- scripts/common.sh@395 -- # return 1 00:04:54.300 18:59:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:54.300 1+0 records in 00:04:54.300 1+0 records out 00:04:54.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00196135 s, 535 MB/s 00:04:54.300 18:59:28 -- spdk/autotest.sh@105 -- # sync 00:04:54.300 18:59:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:54.300 18:59:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:54.300 18:59:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:02.446 18:59:35 -- spdk/autotest.sh@111 -- # uname -s 00:05:02.446 18:59:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:02.446 18:59:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:02.446 18:59:35 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:04.999 Hugepages 00:05:04.999 node hugesize free / total 00:05:04.999 node0 1048576kB 0 / 0 00:05:04.999 node0 2048kB 0 / 0 00:05:04.999 node1 1048576kB 0 / 0 00:05:04.999 node1 2048kB 0 / 0 00:05:04.999 00:05:04.999 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.999 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:04.999 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:04.999 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:04.999 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:04.999 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:04.999 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:04.999 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:04.999 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:04.999 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:04.999 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:04.999 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:04.999 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:04.999 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:05.260 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:05.260 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:05.260 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:05.260 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:05.260 18:59:39 -- spdk/autotest.sh@117 -- # uname -s 00:05:05.260 18:59:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:05.260 18:59:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:05.260 18:59:39 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:09.466 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:09.466 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.855 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:10.855 18:59:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:11.796 18:59:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:11.796 18:59:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:11.796 18:59:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:11.796 18:59:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:11.796 18:59:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.796 18:59:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.796 18:59:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.796 18:59:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.796 18:59:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.796 18:59:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:11.796 18:59:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:11.796 18:59:46 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.001 Waiting for block devices as requested 00:05:16.001 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:16.001 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:16.261 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:16.261 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:16.261 0000:80:04.3 (8086 
2021): vfio-pci -> ioatdma 00:05:16.523 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:16.523 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:16.523 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:16.783 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:16.783 18:59:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:17.043 18:59:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:17.043 18:59:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:17.043 18:59:51 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:17.043 18:59:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:17.043 18:59:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:17.043 18:59:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:17.043 18:59:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:17.043 18:59:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:17.043 18:59:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:17.043 18:59:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:17.043 18:59:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:17.043 18:59:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:17.043 18:59:51 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:17.043 18:59:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:17.043 18:59:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:17.043 18:59:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:17.044 18:59:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:17.044 18:59:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.044 18:59:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:17.044 18:59:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:17.044 18:59:51 -- common/autotest_common.sh@1543 -- # continue 00:05:17.044 18:59:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:17.044 18:59:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.044 18:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.044 18:59:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:17.044 18:59:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.044 18:59:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.044 18:59:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:20.342 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:20.342 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:20.342 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:20.342 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:20.342 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 
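The xtrace above shows how autotest_common.sh resolves a PCI BDF to its NVMe character device and checks the controller's capabilities: it readlinks the /sys/class/nvme/* entries to find the one rooted at the BDF, then parses the oacs field of nvme id-ctrl (bit 0x8 = namespace management). A standalone sketch of that probe, assuming nvme-cli is installed; the function name and loop are illustrative, not the library's exact code:

    probe_nvme_ctrlr() {
        local bdf=$1 sysfs name oacs
        for c in /sys/class/nvme/nvme*; do
            sysfs=$(readlink -f "$c")          # e.g. /sys/devices/.../0000:d8:00.0/nvme/nvme0
            [[ $sysfs == *"$bdf"* ]] || continue
            name=$(basename "$sysfs")          # -> nvme0
            # oacs is a bit field; 0x8 = namespace management, the bit the trace masks out
            oacs=$(nvme id-ctrl "/dev/$name" | awk -F: '/^oacs/ {print $2}')
            (( oacs & 0x8 )) && echo "/dev/$name: namespace management supported"
            return 0
        done
        return 1
    }
    probe_nvme_ctrlr 0000:d8:00.0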
00:05:20.602 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:20.602 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.515 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:22.515 18:59:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:22.515 18:59:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.515 18:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:22.515 18:59:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:22.515 18:59:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:22.515 18:59:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:22.515 18:59:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:22.515 18:59:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:22.515 18:59:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:22.515 18:59:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:22.515 18:59:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:22.515 18:59:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:22.515 18:59:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:22.515 18:59:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.515 18:59:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:22.515 18:59:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:22.775 18:59:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:22.775 18:59:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:22.776 18:59:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:22.776 18:59:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:22.776 18:59:56 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:22.776 18:59:56 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:22.776 18:59:56 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:22.776 18:59:56 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:22.776 18:59:56 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:05:22.776 18:59:56 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:05:22.776 18:59:56 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=104688 00:05:22.776 18:59:56 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.776 18:59:56 -- common/autotest_common.sh@1585 -- # waitforlisten 104688 00:05:22.776 18:59:56 -- common/autotest_common.sh@835 -- # '[' -z 104688 ']' 00:05:22.776 18:59:56 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.776 18:59:56 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.776 18:59:56 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.776 18:59:56 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.776 18:59:56 -- common/autotest_common.sh@10 -- # set +x 00:05:22.776 [2024-12-13 18:59:57.049190] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
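The get_nvme_bdfs_by_id pass traced above narrows the enumerated controllers to one PCI device ID (0x0a54 here) by reading each BDF's sysfs device file. A minimal sketch of the same filter, run from an SPDK checkout; the rootdir default is an assumption:

    rootdir=${rootdir:-.}
    want=0x0a54
    bdfs=()
    # gen_nvme.sh emits a bdev config; .config[].params.traddr holds the PCI
    # addresses -- exactly the pipeline the trace runs.
    while read -r bdf; do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $dev == "$want" ]] && bdfs+=("$bdf")
    done < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    printf '%s\n' "${bdfs[@]}"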
00:05:22.776 [2024-12-13 18:59:57.049246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104688 ] 00:05:22.776 [2024-12-13 18:59:57.142665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.036 [2024-12-13 18:59:57.165049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.604 18:59:57 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.604 18:59:57 -- common/autotest_common.sh@868 -- # return 0 00:05:23.604 18:59:57 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:23.604 18:59:57 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:23.604 18:59:57 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:26.903 nvme0n1 00:05:26.903 19:00:00 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:26.903 [2024-12-13 19:00:01.053529] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:26.903 request: 00:05:26.903 { 00:05:26.903 "nvme_ctrlr_name": "nvme0", 00:05:26.903 "password": "test", 00:05:26.903 "method": "bdev_nvme_opal_revert", 00:05:26.903 "req_id": 1 00:05:26.903 } 00:05:26.903 Got JSON-RPC error response 00:05:26.903 response: 00:05:26.903 { 00:05:26.903 "code": -32602, 00:05:26.903 "message": "Invalid parameters" 00:05:26.903 } 00:05:26.903 19:00:01 -- common/autotest_common.sh@1591 -- # true 00:05:26.903 19:00:01 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:26.903 19:00:01 -- common/autotest_common.sh@1595 -- # killprocess 104688 00:05:26.903 19:00:01 -- common/autotest_common.sh@954 -- # '[' -z 104688 ']' 00:05:26.903 19:00:01 -- common/autotest_common.sh@958 -- # kill -0 104688 00:05:26.903 19:00:01 -- common/autotest_common.sh@959 -- # uname 00:05:26.903 19:00:01 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.903 19:00:01 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104688 00:05:26.903 19:00:01 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.903 19:00:01 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.903 19:00:01 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104688' 00:05:26.903 killing process with pid 104688 00:05:26.903 19:00:01 -- common/autotest_common.sh@973 -- # kill 104688 00:05:26.903 19:00:01 -- common/autotest_common.sh@978 -- # wait 104688 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.903 EAL: 
Unexpected size 0 of DMA remapping cleared instead of 2097152
0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:26.904 EAL: Unexpected size 0 of DMA remapping cleared instead 
00:05:29.443 19:00:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:29.443 19:00:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:29.443 19:00:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:29.443 19:00:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:29.443 19:00:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:29.443 19:00:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.443 19:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:29.443 19:00:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:29.443 19:00:03 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:29.443 19:00:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.443 19:00:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.443 19:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:29.443 ************************************ 00:05:29.443 START TEST env 00:05:29.443 ************************************ 00:05:29.443 19:00:03 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:29.704 * Looking for test storage... 00:05:29.704 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.704 19:00:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.704 19:00:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.704 19:00:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.704 19:00:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.704 19:00:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.704 19:00:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.704 19:00:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.704 19:00:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.704 19:00:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.704 19:00:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.704 19:00:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.704 19:00:03 env -- scripts/common.sh@344 -- # case "$op" in 00:05:29.704 19:00:03 env -- scripts/common.sh@345 -- # : 1 00:05:29.704 19:00:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.704 19:00:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:05:29.704 19:00:03 env -- scripts/common.sh@365 -- # decimal 1 00:05:29.704 19:00:03 env -- scripts/common.sh@353 -- # local d=1 00:05:29.704 19:00:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.704 19:00:03 env -- scripts/common.sh@355 -- # echo 1 00:05:29.704 19:00:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.704 19:00:03 env -- scripts/common.sh@366 -- # decimal 2 00:05:29.704 19:00:03 env -- scripts/common.sh@353 -- # local d=2 00:05:29.704 19:00:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.704 19:00:03 env -- scripts/common.sh@355 -- # echo 2 00:05:29.704 19:00:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.704 19:00:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.704 19:00:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.704 19:00:03 env -- scripts/common.sh@368 -- # return 0 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.704 --rc genhtml_branch_coverage=1 00:05:29.704 --rc genhtml_function_coverage=1 00:05:29.704 --rc genhtml_legend=1 00:05:29.704 --rc geninfo_all_blocks=1 00:05:29.704 --rc geninfo_unexecuted_blocks=1 00:05:29.704 00:05:29.704 ' 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.704 --rc genhtml_branch_coverage=1 00:05:29.704 --rc genhtml_function_coverage=1 00:05:29.704 --rc genhtml_legend=1 00:05:29.704 --rc geninfo_all_blocks=1 00:05:29.704 --rc geninfo_unexecuted_blocks=1 00:05:29.704 00:05:29.704 ' 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.704 --rc genhtml_branch_coverage=1 00:05:29.704 --rc genhtml_function_coverage=1 00:05:29.704 --rc genhtml_legend=1 00:05:29.704 --rc geninfo_all_blocks=1 00:05:29.704 --rc geninfo_unexecuted_blocks=1 00:05:29.704 00:05:29.704 ' 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.704 --rc genhtml_branch_coverage=1 00:05:29.704 --rc genhtml_function_coverage=1 00:05:29.704 --rc genhtml_legend=1 00:05:29.704 --rc geninfo_all_blocks=1 00:05:29.704 --rc geninfo_unexecuted_blocks=1 00:05:29.704 00:05:29.704 ' 00:05:29.704 19:00:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.704 19:00:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.704 19:00:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.704 ************************************ 00:05:29.704 START TEST env_memory 00:05:29.704 ************************************ 00:05:29.704 19:00:03 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.704 00:05:29.704 00:05:29.704 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.704 http://cunit.sourceforge.net/ 00:05:29.704 00:05:29.704 00:05:29.704 Suite: memory 00:05:29.704 Test: alloc and free memory map ...[2024-12-13 19:00:04.017175] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:29.704 passed 00:05:29.704 Test: mem map translation ...[2024-12-13 19:00:04.036477] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:29.705 [2024-12-13 19:00:04.036493] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:29.705 [2024-12-13 19:00:04.036530] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:29.705 [2024-12-13 19:00:04.036539] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:29.705 passed 00:05:29.705 Test: mem map registration ...[2024-12-13 19:00:04.073487] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:29.705 [2024-12-13 19:00:04.073503] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:29.966 passed 00:05:29.966 Test: mem map adjacent registrations ...passed 00:05:29.966 00:05:29.966 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.966 suites 1 1 n/a 0 0 00:05:29.966 tests 4 4 4 0 0 00:05:29.966 asserts 152 152 152 0 n/a 00:05:29.966 00:05:29.966 Elapsed time = 0.125 seconds 00:05:29.966 00:05:29.966 real 0m0.135s 00:05:29.966 user 0m0.127s 00:05:29.966 sys 0m0.007s 00:05:29.966 19:00:04 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.966 19:00:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:29.966 ************************************ 00:05:29.966 END TEST env_memory 00:05:29.966 ************************************ 00:05:29.966 19:00:04 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:29.966 19:00:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.966 19:00:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.966 19:00:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.966 ************************************ 00:05:29.966 START TEST env_vtophys 00:05:29.966 ************************************ 00:05:29.966 19:00:04 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:29.966 EAL: lib.eal log level changed from notice to debug 00:05:29.966 EAL: Detected lcore 0 as core 0 on socket 0 00:05:29.966 EAL: Detected lcore 1 as core 1 on socket 0 00:05:29.966 EAL: Detected lcore 2 as core 2 on socket 0 00:05:29.966 EAL: Detected lcore 3 as core 3 on socket 0 00:05:29.966 EAL: Detected lcore 4 as core 4 on socket 0 00:05:29.966 EAL: Detected lcore 5 as core 5 on socket 0 00:05:29.966 EAL: Detected lcore 6 as core 6 on socket 0 00:05:29.966 EAL: Detected lcore 7 as core 8 on socket 0 00:05:29.966 EAL: Detected lcore 8 as core 9 on socket 0 00:05:29.966 EAL: Detected lcore 9 as core 10 on socket 0 00:05:29.966 EAL: Detected lcore 10 as core 11 on socket 0 00:05:29.966 
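The *ERROR* lines from memory.c above are the expected negative cases: spdk_mem_map_set_translation() and spdk_mem_register() reject any vaddr/len pair that is not a multiple of the map's 2 MB translation unit, which is exactly what the deliberately misaligned vaddr=1234 and len=0x4d2 calls probe. For contrast, a minimal aligned-usage sketch against SPDK's public env API; the 2 MB constant, app name, and no-op notify callback are illustrative, not taken from memory_ut:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    #define SZ_2MB (2ULL * 1024 * 1024)   /* the 2097152-byte unit the errors refer to */

    /* No-op notify callback: a real consumer would program IOMMU mappings here. */
    static int notify(void *cb_ctx, struct spdk_mem_map *map,
                      enum spdk_mem_map_notify_action action,
                      void *vaddr, size_t size)
    {
        (void)cb_ctx; (void)map; (void)action; (void)vaddr; (void)size;
        return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = notify };

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "memmap_sketch";      /* illustrative app name */
        if (spdk_env_init(&opts) < 0) return 1;

        /* Default translation 0 is returned for any region never set. */
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

        /* Both vaddr and len are 2 MB aligned, so this succeeds where
         * vaddr=1234 or len=0x4d2 is rejected as invalid. */
        spdk_mem_map_set_translation(map, SZ_2MB, SZ_2MB, 0x1000000);

        uint64_t len = SZ_2MB;
        printf("0x%llx -> 0x%" PRIx64 "\n", (unsigned long long)SZ_2MB,
               spdk_mem_map_translate(map, SZ_2MB, &len));

        spdk_mem_map_free(&map);
        return 0;
    }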
EAL: Detected lcore 11 as core 12 on socket 0 00:05:29.966 EAL: Detected lcore 12 as core 13 on socket 0 00:05:29.966 EAL: Detected lcore 13 as core 14 on socket 0 00:05:29.966 EAL: Detected lcore 14 as core 16 on socket 0 00:05:29.966 EAL: Detected lcore 15 as core 17 on socket 0 00:05:29.966 EAL: Detected lcore 16 as core 18 on socket 0 00:05:29.966 EAL: Detected lcore 17 as core 19 on socket 0 00:05:29.966 EAL: Detected lcore 18 as core 20 on socket 0 00:05:29.966 EAL: Detected lcore 19 as core 21 on socket 0 00:05:29.966 EAL: Detected lcore 20 as core 22 on socket 0 00:05:29.966 EAL: Detected lcore 21 as core 24 on socket 0 00:05:29.966 EAL: Detected lcore 22 as core 25 on socket 0 00:05:29.966 EAL: Detected lcore 23 as core 26 on socket 0 00:05:29.966 EAL: Detected lcore 24 as core 27 on socket 0 00:05:29.966 EAL: Detected lcore 25 as core 28 on socket 0 00:05:29.966 EAL: Detected lcore 26 as core 29 on socket 0 00:05:29.966 EAL: Detected lcore 27 as core 30 on socket 0 00:05:29.966 EAL: Detected lcore 28 as core 0 on socket 1 00:05:29.966 EAL: Detected lcore 29 as core 1 on socket 1 00:05:29.966 EAL: Detected lcore 30 as core 2 on socket 1 00:05:29.966 EAL: Detected lcore 31 as core 3 on socket 1 00:05:29.966 EAL: Detected lcore 32 as core 4 on socket 1 00:05:29.966 EAL: Detected lcore 33 as core 5 on socket 1 00:05:29.966 EAL: Detected lcore 34 as core 6 on socket 1 00:05:29.966 EAL: Detected lcore 35 as core 8 on socket 1 00:05:29.966 EAL: Detected lcore 36 as core 9 on socket 1 00:05:29.966 EAL: Detected lcore 37 as core 10 on socket 1 00:05:29.966 EAL: Detected lcore 38 as core 11 on socket 1 00:05:29.966 EAL: Detected lcore 39 as core 12 on socket 1 00:05:29.966 EAL: Detected lcore 40 as core 13 on socket 1 00:05:29.966 EAL: Detected lcore 41 as core 14 on socket 1 00:05:29.966 EAL: Detected lcore 42 as core 16 on socket 1 00:05:29.966 EAL: Detected lcore 43 as core 17 on socket 1 00:05:29.966 EAL: Detected lcore 44 as core 18 on socket 1 00:05:29.966 EAL: Detected lcore 45 as core 19 on socket 1 00:05:29.966 EAL: Detected lcore 46 as core 20 on socket 1 00:05:29.966 EAL: Detected lcore 47 as core 21 on socket 1 00:05:29.966 EAL: Detected lcore 48 as core 22 on socket 1 00:05:29.966 EAL: Detected lcore 49 as core 24 on socket 1 00:05:29.966 EAL: Detected lcore 50 as core 25 on socket 1 00:05:29.966 EAL: Detected lcore 51 as core 26 on socket 1 00:05:29.966 EAL: Detected lcore 52 as core 27 on socket 1 00:05:29.966 EAL: Detected lcore 53 as core 28 on socket 1 00:05:29.966 EAL: Detected lcore 54 as core 29 on socket 1 00:05:29.966 EAL: Detected lcore 55 as core 30 on socket 1 00:05:29.966 EAL: Detected lcore 56 as core 0 on socket 0 00:05:29.966 EAL: Detected lcore 57 as core 1 on socket 0 00:05:29.966 EAL: Detected lcore 58 as core 2 on socket 0 00:05:29.966 EAL: Detected lcore 59 as core 3 on socket 0 00:05:29.966 EAL: Detected lcore 60 as core 4 on socket 0 00:05:29.966 EAL: Detected lcore 61 as core 5 on socket 0 00:05:29.966 EAL: Detected lcore 62 as core 6 on socket 0 00:05:29.966 EAL: Detected lcore 63 as core 8 on socket 0 00:05:29.966 EAL: Detected lcore 64 as core 9 on socket 0 00:05:29.966 EAL: Detected lcore 65 as core 10 on socket 0 00:05:29.966 EAL: Detected lcore 66 as core 11 on socket 0 00:05:29.966 EAL: Detected lcore 67 as core 12 on socket 0 00:05:29.966 EAL: Detected lcore 68 as core 13 on socket 0 00:05:29.966 EAL: Detected lcore 69 as core 14 on socket 0 00:05:29.966 EAL: Detected lcore 70 as core 16 on socket 0 00:05:29.966 EAL: Detected lcore 71 as core 
17 on socket 0 00:05:29.966 EAL: Detected lcore 72 as core 18 on socket 0 00:05:29.966 EAL: Detected lcore 73 as core 19 on socket 0 00:05:29.966 EAL: Detected lcore 74 as core 20 on socket 0 00:05:29.966 EAL: Detected lcore 75 as core 21 on socket 0 00:05:29.966 EAL: Detected lcore 76 as core 22 on socket 0 00:05:29.966 EAL: Detected lcore 77 as core 24 on socket 0 00:05:29.966 EAL: Detected lcore 78 as core 25 on socket 0 00:05:29.966 EAL: Detected lcore 79 as core 26 on socket 0 00:05:29.966 EAL: Detected lcore 80 as core 27 on socket 0 00:05:29.966 EAL: Detected lcore 81 as core 28 on socket 0 00:05:29.966 EAL: Detected lcore 82 as core 29 on socket 0 00:05:29.966 EAL: Detected lcore 83 as core 30 on socket 0 00:05:29.966 EAL: Detected lcore 84 as core 0 on socket 1 00:05:29.966 EAL: Detected lcore 85 as core 1 on socket 1 00:05:29.966 EAL: Detected lcore 86 as core 2 on socket 1 00:05:29.966 EAL: Detected lcore 87 as core 3 on socket 1 00:05:29.966 EAL: Detected lcore 88 as core 4 on socket 1 00:05:29.966 EAL: Detected lcore 89 as core 5 on socket 1 00:05:29.966 EAL: Detected lcore 90 as core 6 on socket 1 00:05:29.966 EAL: Detected lcore 91 as core 8 on socket 1 00:05:29.966 EAL: Detected lcore 92 as core 9 on socket 1 00:05:29.966 EAL: Detected lcore 93 as core 10 on socket 1 00:05:29.966 EAL: Detected lcore 94 as core 11 on socket 1 00:05:29.966 EAL: Detected lcore 95 as core 12 on socket 1 00:05:29.966 EAL: Detected lcore 96 as core 13 on socket 1 00:05:29.966 EAL: Detected lcore 97 as core 14 on socket 1 00:05:29.966 EAL: Detected lcore 98 as core 16 on socket 1 00:05:29.966 EAL: Detected lcore 99 as core 17 on socket 1 00:05:29.966 EAL: Detected lcore 100 as core 18 on socket 1 00:05:29.966 EAL: Detected lcore 101 as core 19 on socket 1 00:05:29.966 EAL: Detected lcore 102 as core 20 on socket 1 00:05:29.966 EAL: Detected lcore 103 as core 21 on socket 1 00:05:29.966 EAL: Detected lcore 104 as core 22 on socket 1 00:05:29.966 EAL: Detected lcore 105 as core 24 on socket 1 00:05:29.966 EAL: Detected lcore 106 as core 25 on socket 1 00:05:29.966 EAL: Detected lcore 107 as core 26 on socket 1 00:05:29.966 EAL: Detected lcore 108 as core 27 on socket 1 00:05:29.966 EAL: Detected lcore 109 as core 28 on socket 1 00:05:29.966 EAL: Detected lcore 110 as core 29 on socket 1 00:05:29.966 EAL: Detected lcore 111 as core 30 on socket 1 00:05:29.966 EAL: Maximum logical cores by configuration: 128 00:05:29.967 EAL: Detected CPU lcores: 112 00:05:29.967 EAL: Detected NUMA nodes: 2 00:05:29.967 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:29.967 EAL: Detected shared linkage of DPDK 00:05:29.967 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:29.967 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:29.967 EAL: Registered [vdev] bus. 
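The long run of "Detected lcore N as core M on socket S" lines is EAL's CPU topology probe at init time (112 lcores across 2 NUMA sockets on this node); an application can read the same mapping back afterwards. A short sketch using stock DPDK rte_lcore.h calls, nothing specific to this run:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* EAL consumes its own arguments (core mask, IOVA mode, ...) first. */
        if (rte_eal_init(argc, argv) < 0) return 1;

        printf("%u lcores enabled, %u NUMA socket(s)\n",
               rte_lcore_count(), rte_socket_count());

        unsigned lcore_id;
        /* Iterates only the lcores enabled by the core mask. */
        RTE_LCORE_FOREACH(lcore_id) {
            printf("lcore %u -> socket %u\n",
                   lcore_id, rte_lcore_to_socket_id(lcore_id));
        }

        rte_eal_cleanup();
        return 0;
    }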
00:05:29.967 EAL: bus.vdev log level changed from disabled to notice 00:05:29.967 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:29.967 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:29.967 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:29.967 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:29.967 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:29.967 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:29.967 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:29.967 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:29.967 EAL: No shared files mode enabled, IPC will be disabled 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Bus pci wants IOVA as 'DC' 00:05:29.967 EAL: Bus vdev wants IOVA as 'DC' 00:05:29.967 EAL: Buses did not request a specific IOVA mode. 00:05:29.967 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:29.967 EAL: Selected IOVA mode 'VA' 00:05:29.967 EAL: Probing VFIO support... 00:05:29.967 EAL: IOMMU type 1 (Type 1) is supported 00:05:29.967 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:29.967 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:29.967 EAL: VFIO support initialized 00:05:29.967 EAL: Ask a virtual area of 0x2e000 bytes 00:05:29.967 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:29.967 EAL: Setting up physically contiguous memory... 
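The IOVA negotiation proceeds exactly as logged: both buses answered 'DC' (don't care), an IOMMU was present, so EAL selected IOVA-as-VA and initialized VFIO with IOMMU type 1. An application can confirm the negotiated mode after init; a sketch using DPDK's standard rte_eal_iova_mode():

    #include <stdio.h>
    #include <rte_eal.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) return 1;

        /* Mirrors the "Selected IOVA mode 'VA'" decision EAL logs. */
        switch (rte_eal_iova_mode()) {
        case RTE_IOVA_VA: printf("IOVA = virtual addresses (IOMMU/VFIO)\n"); break;
        case RTE_IOVA_PA: printf("IOVA = physical addresses\n"); break;
        default:          printf("IOVA mode undecided (DC)\n"); break;
        }

        rte_eal_cleanup();
        return 0;
    }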
00:05:29.967 EAL: Setting maximum number of open files to 524288 00:05:29.967 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:29.967 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:29.967 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:29.967 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.967 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:29.967 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.967 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.967 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:29.967 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:29.967 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.967 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:29.967 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.967 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.967 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:29.967 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:29.967 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.967 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:29.967 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.967 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.967 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:29.967 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:29.967 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.967 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:29.967 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.967 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.967 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:29.967 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:29.967 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:29.967 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.967 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:29.967 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.967 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.967 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:29.967 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:29.967 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.967 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:29.967 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.967 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.967 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:29.967 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:29.967 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.967 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:29.967 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.967 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.967 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:29.967 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:29.967 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.967 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:29.967 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.967 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.967 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:29.967 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:29.967 EAL: Hugepages will be freed exactly as allocated. 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: TSC frequency is ~2500000 KHz 00:05:29.967 EAL: Main lcore 0 is ready (tid=7f7437088a00;cpuset=[0]) 00:05:29.967 EAL: Trying to obtain current memory policy. 00:05:29.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.967 EAL: Restoring previous memory policy: 0 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was expanded by 2MB 00:05:29.967 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:29.967 EAL: probe driver: 8086:37d2 net_i40e 00:05:29.967 EAL: Not managed by a supported kernel driver, skipped 00:05:29.967 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:29.967 EAL: probe driver: 8086:37d2 net_i40e 00:05:29.967 EAL: Not managed by a supported kernel driver, skipped 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:29.967 EAL: Mem event callback 'spdk:(nil)' registered 00:05:29.967 00:05:29.967 00:05:29.967 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.967 http://cunit.sourceforge.net/ 00:05:29.967 00:05:29.967 00:05:29.967 Suite: components_suite 00:05:29.967 Test: vtophys_malloc_test ...passed 00:05:29.967 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:29.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.967 EAL: Restoring previous memory policy: 4 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was expanded by 4MB 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was shrunk by 4MB 00:05:29.967 EAL: Trying to obtain current memory policy. 00:05:29.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.967 EAL: Restoring previous memory policy: 4 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was expanded by 6MB 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was shrunk by 6MB 00:05:29.967 EAL: Trying to obtain current memory policy. 00:05:29.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.967 EAL: Restoring previous memory policy: 4 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was expanded by 10MB 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was shrunk by 10MB 00:05:29.967 EAL: Trying to obtain current memory policy. 
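vtophys_malloc_test repeats one pattern: allocate a buffer of the next size, verify the virtual-to-physical translation resolves, free it; each round produces the "Heap on socket 0 was expanded by / shrunk by N MB" pair as DPDK grows the heap on demand and trims it on free. One such round, sketched with SPDK's public env API (buffer size, alignment, and app name are illustrative):

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";
        if (spdk_env_init(&opts) < 0) return 1;

        /* 2 MB DMA-safe buffer; allocating it drives the same
         * "Heap on socket 0 was expanded" path seen in the log. */
        void *buf = spdk_dma_zmalloc(2 * 1024 * 1024, 0x1000, NULL);
        if (buf == NULL) return 1;

        /* spdk_vtophys() returns SPDK_VTOPHYS_ERROR for unregistered memory. */
        uint64_t paddr = spdk_vtophys(buf, NULL);
        printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

        spdk_dma_free(buf);  /* heap shrinks back, as in the log */
        return 0;
    }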
00:05:29.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.967 EAL: Restoring previous memory policy: 4 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was expanded by 18MB 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was shrunk by 18MB 00:05:29.967 EAL: Trying to obtain current memory policy. 00:05:29.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.967 EAL: Restoring previous memory policy: 4 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was expanded by 34MB 00:05:29.967 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.967 EAL: request: mp_malloc_sync 00:05:29.967 EAL: No shared files mode enabled, IPC is disabled 00:05:29.967 EAL: Heap on socket 0 was shrunk by 34MB 00:05:29.967 EAL: Trying to obtain current memory policy. 00:05:29.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.228 EAL: Restoring previous memory policy: 4 00:05:30.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.228 EAL: request: mp_malloc_sync 00:05:30.228 EAL: No shared files mode enabled, IPC is disabled 00:05:30.228 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.228 EAL: request: mp_malloc_sync 00:05:30.228 EAL: No shared files mode enabled, IPC is disabled 00:05:30.228 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.228 EAL: Trying to obtain current memory policy. 00:05:30.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.228 EAL: Restoring previous memory policy: 4 00:05:30.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.228 EAL: request: mp_malloc_sync 00:05:30.228 EAL: No shared files mode enabled, IPC is disabled 00:05:30.228 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.228 EAL: request: mp_malloc_sync 00:05:30.228 EAL: No shared files mode enabled, IPC is disabled 00:05:30.228 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.228 EAL: Trying to obtain current memory policy. 00:05:30.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.228 EAL: Restoring previous memory policy: 4 00:05:30.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.228 EAL: request: mp_malloc_sync 00:05:30.228 EAL: No shared files mode enabled, IPC is disabled 00:05:30.228 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.228 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.228 EAL: request: mp_malloc_sync 00:05:30.228 EAL: No shared files mode enabled, IPC is disabled 00:05:30.228 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.228 EAL: Trying to obtain current memory policy. 
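Each "Calling mem event callback 'spdk:(nil)'" line is DPDK invoking the hook SPDK registered earlier ("Mem event callback 'spdk:(nil)' registered") so it can track hugepages as the heap grows and shrinks. The hook itself is plain DPDK; a sketch of registering one, where the callback name "sketch" and the printf body are illustrative:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_memory.h>
    #include <rte_malloc.h>

    /* Fired on every heap grow/shrink, i.e. the "expanded by"/"shrunk by" lines. */
    static void mem_event(enum rte_mem_event event_type, const void *addr,
                          size_t len, void *arg)
    {
        (void)arg;
        printf("%s: %zu bytes at %p\n",
               event_type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", len, addr);
    }

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) return 1;

        rte_mem_event_callback_register("sketch", mem_event, NULL);

        void *p = rte_malloc(NULL, 64 * 1024 * 1024, 0); /* forces a heap expansion */
        rte_free(p);                                     /* and the matching shrink */

        rte_mem_event_callback_unregister("sketch", NULL);
        rte_eal_cleanup();
        return 0;
    }

Note this relies on EAL's default dynamic memory mode; with --legacy-mem the heap is preallocated and no grow/shrink events fire.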
00:05:30.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.488 EAL: Restoring previous memory policy: 4 00:05:30.488 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.488 EAL: request: mp_malloc_sync 00:05:30.488 EAL: No shared files mode enabled, IPC is disabled 00:05:30.488 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.488 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.488 EAL: request: mp_malloc_sync 00:05:30.488 EAL: No shared files mode enabled, IPC is disabled 00:05:30.488 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.488 EAL: Trying to obtain current memory policy. 00:05:30.488 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.748 EAL: Restoring previous memory policy: 4 00:05:30.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.748 EAL: request: mp_malloc_sync 00:05:30.748 EAL: No shared files mode enabled, IPC is disabled 00:05:30.748 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.009 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.009 EAL: request: mp_malloc_sync 00:05:31.009 EAL: No shared files mode enabled, IPC is disabled 00:05:31.009 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.009 passed 00:05:31.009 00:05:31.009 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.009 suites 1 1 n/a 0 0 00:05:31.009 tests 2 2 2 0 0 00:05:31.009 asserts 497 497 497 0 n/a 00:05:31.009 00:05:31.009 Elapsed time = 0.978 seconds 00:05:31.009 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.009 EAL: request: mp_malloc_sync 00:05:31.009 EAL: No shared files mode enabled, IPC is disabled 00:05:31.009 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.009 EAL: No shared files mode enabled, IPC is disabled 00:05:31.009 EAL: No shared files mode enabled, IPC is disabled 00:05:31.009 EAL: No shared files mode enabled, IPC is disabled 00:05:31.009 00:05:31.009 real 0m1.133s 00:05:31.009 user 0m0.652s 00:05:31.009 sys 0m0.445s 00:05:31.009 19:00:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.009 19:00:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:31.009 ************************************ 00:05:31.009 END TEST env_vtophys 00:05:31.009 ************************************ 00:05:31.009 19:00:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.009 19:00:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.009 19:00:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.009 19:00:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.269 ************************************ 00:05:31.269 START TEST env_pci 00:05:31.269 ************************************ 00:05:31.269 19:00:05 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.269 00:05:31.269 00:05:31.269 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.269 http://cunit.sourceforge.net/ 00:05:31.269 00:05:31.269 00:05:31.269 Suite: pci 00:05:31.269 Test: pci_hook ...[2024-12-13 19:00:05.447188] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 106744 has claimed it 00:05:31.269 EAL: Cannot find device (10000:00:01.0) 00:05:31.269 EAL: Failed to attach device on primary process 00:05:31.269 passed 00:05:31.269 00:05:31.269 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.269 suites 1 1 
n/a 0 0 00:05:31.269 tests 1 1 1 0 0 00:05:31.269 asserts 25 25 25 0 n/a 00:05:31.269 00:05:31.269 Elapsed time = 0.035 seconds 00:05:31.269 00:05:31.269 real 0m0.056s 00:05:31.269 user 0m0.014s 00:05:31.269 sys 0m0.042s 00:05:31.269 19:00:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.269 19:00:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:31.269 ************************************ 00:05:31.269 END TEST env_pci 00:05:31.269 ************************************ 00:05:31.269 19:00:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.269 19:00:05 env -- env/env.sh@15 -- # uname 00:05:31.269 19:00:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.269 19:00:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.269 19:00:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.269 19:00:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:31.269 19:00:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.269 19:00:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.269 ************************************ 00:05:31.269 START TEST env_dpdk_post_init 00:05:31.269 ************************************ 00:05:31.269 19:00:05 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.269 EAL: Detected CPU lcores: 112 00:05:31.269 EAL: Detected NUMA nodes: 2 00:05:31.269 EAL: Detected shared linkage of DPDK 00:05:31.269 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.269 EAL: Selected IOVA mode 'VA' 00:05:31.269 EAL: VFIO support initialized 00:05:31.529 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.529 EAL: Using IOMMU type 1 (Type 1) 00:05:31.529 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 
00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:31.530 EAL: Ignore mapping IO port bar(1) 00:05:31.530 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:31.790 EAL: Ignore mapping IO port bar(1) 00:05:31.790 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:31.790 EAL: Ignore mapping IO port bar(1) 00:05:31.790 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:32.360 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:36.559 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:36.559 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:36.819 Starting DPDK initialization... 00:05:36.819 Starting SPDK post initialization... 00:05:36.819 SPDK NVMe probe 00:05:36.819 Attaching to 0000:d8:00.0 00:05:36.819 Attached to 0000:d8:00.0 00:05:36.819 Cleaning up... 00:05:36.819 00:05:36.819 real 0m5.370s 00:05:36.819 user 0m4.002s 00:05:36.819 sys 0m0.436s 00:05:36.819 19:00:10 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.819 19:00:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.819 ************************************ 00:05:36.819 END TEST env_dpdk_post_init 00:05:36.819 ************************************ 00:05:36.819 19:00:10 env -- env/env.sh@26 -- # uname 00:05:36.819 19:00:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:36.819 19:00:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.819 19:00:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.819 19:00:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.819 19:00:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.819 ************************************ 00:05:36.819 START TEST env_mem_callbacks 00:05:36.819 ************************************ 00:05:36.819 19:00:11 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.819 EAL: Detected CPU lcores: 112 00:05:36.819 EAL: Detected NUMA nodes: 2 00:05:36.819 EAL: Detected shared linkage of DPDK 00:05:36.819 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.819 EAL: Selected IOVA mode 'VA' 00:05:36.819 EAL: VFIO support initialized 00:05:36.819 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.819 00:05:36.819 00:05:36.819 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.819 http://cunit.sourceforge.net/ 00:05:36.819 00:05:36.819 00:05:36.819 Suite: memory 00:05:36.819 Test: test ... 
00:05:36.819 register 0x200000200000 2097152 00:05:36.819 malloc 3145728 00:05:36.819 register 0x200000400000 4194304 00:05:36.819 buf 0x200000500000 len 3145728 PASSED 00:05:36.819 malloc 64 00:05:36.819 buf 0x2000004fff40 len 64 PASSED 00:05:36.819 malloc 4194304 00:05:36.819 register 0x200000800000 6291456 00:05:36.819 buf 0x200000a00000 len 4194304 PASSED 00:05:36.819 free 0x200000500000 3145728 00:05:36.819 free 0x2000004fff40 64 00:05:36.819 unregister 0x200000400000 4194304 PASSED 00:05:36.819 free 0x200000a00000 4194304 00:05:36.819 unregister 0x200000800000 6291456 PASSED 00:05:36.819 malloc 8388608 00:05:36.819 register 0x200000400000 10485760 00:05:36.819 buf 0x200000600000 len 8388608 PASSED 00:05:36.819 free 0x200000600000 8388608 00:05:36.819 unregister 0x200000400000 10485760 PASSED 00:05:36.819 passed 00:05:36.819 00:05:36.819 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.819 suites 1 1 n/a 0 0 00:05:36.819 tests 1 1 1 0 0 00:05:36.819 asserts 15 15 15 0 n/a 00:05:36.819 00:05:36.819 Elapsed time = 0.008 seconds 00:05:36.819 00:05:36.819 real 0m0.067s 00:05:36.819 user 0m0.016s 00:05:36.819 sys 0m0.051s 00:05:36.819 19:00:11 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.819 19:00:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:36.819 ************************************ 00:05:36.819 END TEST env_mem_callbacks 00:05:36.819 ************************************ 00:05:36.819 00:05:36.819 real 0m7.414s 00:05:36.819 user 0m5.091s 00:05:36.819 sys 0m1.403s 00:05:36.819 19:00:11 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.819 19:00:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.819 ************************************ 00:05:36.819 END TEST env 00:05:36.819 ************************************ 00:05:37.079 19:00:11 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.079 19:00:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.079 19:00:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.079 19:00:11 -- common/autotest_common.sh@10 -- # set +x 00:05:37.079 ************************************ 00:05:37.079 START TEST rpc 00:05:37.079 ************************************ 00:05:37.079 19:00:11 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.079 * Looking for test storage... 
00:05:37.079 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:37.079 19:00:11 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.079 19:00:11 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.079 19:00:11 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.079 19:00:11 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.079 19:00:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.079 19:00:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.079 19:00:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.080 19:00:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.080 19:00:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.080 19:00:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.080 19:00:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.080 19:00:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.080 19:00:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.080 19:00:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.080 19:00:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.080 19:00:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.080 19:00:11 rpc -- scripts/common.sh@345 -- # : 1 00:05:37.080 19:00:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.080 19:00:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.080 19:00:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.080 19:00:11 rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.080 19:00:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.080 19:00:11 rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.080 19:00:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.080 19:00:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.080 19:00:11 rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.080 19:00:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.080 19:00:11 rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.080 19:00:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.080 19:00:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.080 19:00:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.080 19:00:11 rpc -- scripts/common.sh@368 -- # return 0 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.080 --rc genhtml_branch_coverage=1 00:05:37.080 --rc genhtml_function_coverage=1 00:05:37.080 --rc genhtml_legend=1 00:05:37.080 --rc geninfo_all_blocks=1 00:05:37.080 --rc geninfo_unexecuted_blocks=1 00:05:37.080 00:05:37.080 ' 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.080 --rc genhtml_branch_coverage=1 00:05:37.080 --rc genhtml_function_coverage=1 00:05:37.080 --rc genhtml_legend=1 00:05:37.080 --rc geninfo_all_blocks=1 00:05:37.080 --rc geninfo_unexecuted_blocks=1 00:05:37.080 00:05:37.080 ' 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.080 --rc genhtml_branch_coverage=1 00:05:37.080 --rc genhtml_function_coverage=1 00:05:37.080 
--rc genhtml_legend=1 00:05:37.080 --rc geninfo_all_blocks=1 00:05:37.080 --rc geninfo_unexecuted_blocks=1 00:05:37.080 00:05:37.080 ' 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.080 --rc genhtml_branch_coverage=1 00:05:37.080 --rc genhtml_function_coverage=1 00:05:37.080 --rc genhtml_legend=1 00:05:37.080 --rc geninfo_all_blocks=1 00:05:37.080 --rc geninfo_unexecuted_blocks=1 00:05:37.080 00:05:37.080 ' 00:05:37.080 19:00:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=107982 00:05:37.080 19:00:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.080 19:00:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:37.080 19:00:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 107982 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 107982 ']' 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.080 19:00:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.340 [2024-12-13 19:00:11.490672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:37.340 [2024-12-13 19:00:11.490719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107982 ] 00:05:37.340 [2024-12-13 19:00:11.578645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.340 [2024-12-13 19:00:11.600439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:37.340 [2024-12-13 19:00:11.600475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 107982' to capture a snapshot of events at runtime. 00:05:37.340 [2024-12-13 19:00:11.600484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:37.340 [2024-12-13 19:00:11.600492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:37.340 [2024-12-13 19:00:11.600499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid107982 for offline analysis/debug. 
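The spdk_tgt process that waitforlisten polls for is essentially spdk_app_start() plus an RPC listen address: once the app is up (the reactor.c "Reactor started on core 0" notice just below), /var/tmp/spdk.sock accepts JSON-RPC. A minimal app skeleton along those lines, assuming SPDK's spdk/event.h; the app name is illustrative, while the socket path matches the default shown in the log:

    #include "spdk/event.h"
    #include "spdk/log.h"

    static void app_started(void *ctx)
    {
        (void)ctx;
        /* The RPC server is listening by now, which is the condition
         * waitforlisten checks before the rpc tests proceed. */
        SPDK_NOTICELOG("app up; RPC socket ready\n");
    }

    int main(int argc, char **argv)
    {
        (void)argc; (void)argv;
        struct spdk_app_opts opts = {};

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "rpc_sketch";
        opts.rpc_addr = "/var/tmp/spdk.sock"; /* default socket, as in the log */

        int rc = spdk_app_start(&opts, app_started, NULL);
        spdk_app_fini();
        return rc;
    }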
00:05:37.340 [2024-12-13 19:00:11.601126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.600 19:00:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.600 19:00:11 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:37.600 19:00:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:37.600 19:00:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:37.600 19:00:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:37.600 19:00:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:37.600 19:00:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.600 19:00:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.600 19:00:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.600 ************************************ 00:05:37.600 START TEST rpc_integrity 00:05:37.600 ************************************ 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:37.600 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.600 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.600 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.600 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.600 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.600 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.600 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.600 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.601 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.601 { 00:05:37.601 "name": "Malloc0", 00:05:37.601 "aliases": [ 00:05:37.601 "b5c841cc-5d6d-440f-8be8-80691547e91c" 00:05:37.601 ], 00:05:37.601 "product_name": "Malloc disk", 00:05:37.601 "block_size": 512, 00:05:37.601 "num_blocks": 16384, 00:05:37.601 "uuid": "b5c841cc-5d6d-440f-8be8-80691547e91c", 00:05:37.601 "assigned_rate_limits": { 00:05:37.601 "rw_ios_per_sec": 0, 00:05:37.601 "rw_mbytes_per_sec": 0, 00:05:37.601 "r_mbytes_per_sec": 0, 00:05:37.601 "w_mbytes_per_sec": 0 00:05:37.601 }, 00:05:37.601 "claimed": false, 
00:05:37.601 "zoned": false, 00:05:37.601 "supported_io_types": { 00:05:37.601 "read": true, 00:05:37.601 "write": true, 00:05:37.601 "unmap": true, 00:05:37.601 "flush": true, 00:05:37.601 "reset": true, 00:05:37.601 "nvme_admin": false, 00:05:37.601 "nvme_io": false, 00:05:37.601 "nvme_io_md": false, 00:05:37.601 "write_zeroes": true, 00:05:37.601 "zcopy": true, 00:05:37.601 "get_zone_info": false, 00:05:37.601 "zone_management": false, 00:05:37.601 "zone_append": false, 00:05:37.601 "compare": false, 00:05:37.601 "compare_and_write": false, 00:05:37.601 "abort": true, 00:05:37.601 "seek_hole": false, 00:05:37.601 "seek_data": false, 00:05:37.601 "copy": true, 00:05:37.601 "nvme_iov_md": false 00:05:37.601 }, 00:05:37.601 "memory_domains": [ 00:05:37.601 { 00:05:37.601 "dma_device_id": "system", 00:05:37.601 "dma_device_type": 1 00:05:37.601 }, 00:05:37.601 { 00:05:37.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.601 "dma_device_type": 2 00:05:37.601 } 00:05:37.601 ], 00:05:37.601 "driver_specific": {} 00:05:37.601 } 00:05:37.601 ]' 00:05:37.601 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.601 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.601 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.601 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.601 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.601 [2024-12-13 19:00:11.962413] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.601 [2024-12-13 19:00:11.962440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.601 [2024-12-13 19:00:11.962453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa892c0 00:05:37.601 [2024-12-13 19:00:11.962462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.601 [2024-12-13 19:00:11.963534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.601 [2024-12-13 19:00:11.963555] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.601 Passthru0 00:05:37.601 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.601 19:00:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.601 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.601 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.861 19:00:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.861 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.861 { 00:05:37.861 "name": "Malloc0", 00:05:37.861 "aliases": [ 00:05:37.861 "b5c841cc-5d6d-440f-8be8-80691547e91c" 00:05:37.861 ], 00:05:37.861 "product_name": "Malloc disk", 00:05:37.861 "block_size": 512, 00:05:37.861 "num_blocks": 16384, 00:05:37.861 "uuid": "b5c841cc-5d6d-440f-8be8-80691547e91c", 00:05:37.861 "assigned_rate_limits": { 00:05:37.861 "rw_ios_per_sec": 0, 00:05:37.861 "rw_mbytes_per_sec": 0, 00:05:37.861 "r_mbytes_per_sec": 0, 00:05:37.861 "w_mbytes_per_sec": 0 00:05:37.861 }, 00:05:37.861 "claimed": true, 00:05:37.861 "claim_type": "exclusive_write", 00:05:37.861 "zoned": false, 00:05:37.861 "supported_io_types": { 00:05:37.861 "read": true, 00:05:37.861 "write": true, 00:05:37.861 "unmap": true, 00:05:37.861 "flush": true, 00:05:37.861 "reset": true, 
00:05:37.861 "nvme_admin": false, 00:05:37.861 "nvme_io": false, 00:05:37.861 "nvme_io_md": false, 00:05:37.861 "write_zeroes": true, 00:05:37.861 "zcopy": true, 00:05:37.861 "get_zone_info": false, 00:05:37.861 "zone_management": false, 00:05:37.861 "zone_append": false, 00:05:37.861 "compare": false, 00:05:37.861 "compare_and_write": false, 00:05:37.861 "abort": true, 00:05:37.861 "seek_hole": false, 00:05:37.861 "seek_data": false, 00:05:37.861 "copy": true, 00:05:37.861 "nvme_iov_md": false 00:05:37.861 }, 00:05:37.861 "memory_domains": [ 00:05:37.861 { 00:05:37.861 "dma_device_id": "system", 00:05:37.861 "dma_device_type": 1 00:05:37.861 }, 00:05:37.861 { 00:05:37.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.861 "dma_device_type": 2 00:05:37.861 } 00:05:37.861 ], 00:05:37.861 "driver_specific": {} 00:05:37.861 }, 00:05:37.861 { 00:05:37.861 "name": "Passthru0", 00:05:37.861 "aliases": [ 00:05:37.861 "33f84eb1-d332-57ce-8f9b-117a3f286053" 00:05:37.861 ], 00:05:37.861 "product_name": "passthru", 00:05:37.861 "block_size": 512, 00:05:37.861 "num_blocks": 16384, 00:05:37.861 "uuid": "33f84eb1-d332-57ce-8f9b-117a3f286053", 00:05:37.861 "assigned_rate_limits": { 00:05:37.861 "rw_ios_per_sec": 0, 00:05:37.861 "rw_mbytes_per_sec": 0, 00:05:37.861 "r_mbytes_per_sec": 0, 00:05:37.861 "w_mbytes_per_sec": 0 00:05:37.861 }, 00:05:37.861 "claimed": false, 00:05:37.861 "zoned": false, 00:05:37.861 "supported_io_types": { 00:05:37.861 "read": true, 00:05:37.861 "write": true, 00:05:37.861 "unmap": true, 00:05:37.861 "flush": true, 00:05:37.861 "reset": true, 00:05:37.861 "nvme_admin": false, 00:05:37.861 "nvme_io": false, 00:05:37.861 "nvme_io_md": false, 00:05:37.861 "write_zeroes": true, 00:05:37.861 "zcopy": true, 00:05:37.861 "get_zone_info": false, 00:05:37.861 "zone_management": false, 00:05:37.861 "zone_append": false, 00:05:37.861 "compare": false, 00:05:37.861 "compare_and_write": false, 00:05:37.861 "abort": true, 00:05:37.861 "seek_hole": false, 00:05:37.861 "seek_data": false, 00:05:37.861 "copy": true, 00:05:37.861 "nvme_iov_md": false 00:05:37.861 }, 00:05:37.861 "memory_domains": [ 00:05:37.861 { 00:05:37.861 "dma_device_id": "system", 00:05:37.861 "dma_device_type": 1 00:05:37.861 }, 00:05:37.861 { 00:05:37.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.861 "dma_device_type": 2 00:05:37.861 } 00:05:37.861 ], 00:05:37.861 "driver_specific": { 00:05:37.862 "passthru": { 00:05:37.862 "name": "Passthru0", 00:05:37.862 "base_bdev_name": "Malloc0" 00:05:37.862 } 00:05:37.862 } 00:05:37.862 } 00:05:37.862 ]' 00:05:37.862 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.862 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.862 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.862 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.862 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.862 
19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.862 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.862 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.862 19:00:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.862 00:05:37.862 real 0m0.279s 00:05:37.862 user 0m0.164s 00:05:37.862 sys 0m0.052s 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.862 19:00:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 ************************************ 00:05:37.862 END TEST rpc_integrity 00:05:37.862 ************************************ 00:05:37.862 19:00:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.862 19:00:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.862 19:00:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.862 19:00:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 ************************************ 00:05:37.862 START TEST rpc_plugins 00:05:37.862 ************************************ 00:05:37.862 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:37.862 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.862 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.862 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.862 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.862 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:37.862 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.862 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.862 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.862 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.862 { 00:05:37.862 "name": "Malloc1", 00:05:37.862 "aliases": [ 00:05:37.862 "09e8df78-5d5e-4e35-ad01-b56ecac22e4f" 00:05:37.862 ], 00:05:37.862 "product_name": "Malloc disk", 00:05:37.862 "block_size": 4096, 00:05:37.862 "num_blocks": 256, 00:05:37.862 "uuid": "09e8df78-5d5e-4e35-ad01-b56ecac22e4f", 00:05:37.862 "assigned_rate_limits": { 00:05:37.862 "rw_ios_per_sec": 0, 00:05:37.862 "rw_mbytes_per_sec": 0, 00:05:37.862 "r_mbytes_per_sec": 0, 00:05:37.862 "w_mbytes_per_sec": 0 00:05:37.862 }, 00:05:37.862 "claimed": false, 00:05:37.862 "zoned": false, 00:05:37.862 "supported_io_types": { 00:05:37.862 "read": true, 00:05:37.862 "write": true, 00:05:37.862 "unmap": true, 00:05:37.862 "flush": true, 00:05:37.862 "reset": true, 00:05:37.862 "nvme_admin": false, 00:05:37.862 "nvme_io": false, 00:05:37.862 "nvme_io_md": false, 00:05:37.862 "write_zeroes": true, 00:05:37.862 "zcopy": true, 00:05:37.862 "get_zone_info": false, 00:05:37.862 "zone_management": false, 00:05:37.862 "zone_append": false, 00:05:37.862 "compare": false, 00:05:37.862 "compare_and_write": false, 00:05:37.862 "abort": true, 00:05:37.862 "seek_hole": false, 00:05:37.862 "seek_data": false, 00:05:37.862 "copy": true, 00:05:37.862 "nvme_iov_md": false 00:05:37.862 }, 00:05:37.862 
"memory_domains": [ 00:05:37.862 { 00:05:37.862 "dma_device_id": "system", 00:05:37.862 "dma_device_type": 1 00:05:37.862 }, 00:05:37.862 { 00:05:37.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.862 "dma_device_type": 2 00:05:37.862 } 00:05:37.862 ], 00:05:37.862 "driver_specific": {} 00:05:37.862 } 00:05:37.862 ]' 00:05:37.862 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:38.122 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:38.122 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:38.122 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.122 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.122 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:38.122 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.122 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.122 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:38.122 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:38.122 19:00:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:38.122 00:05:38.122 real 0m0.146s 00:05:38.122 user 0m0.077s 00:05:38.122 sys 0m0.032s 00:05:38.122 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.122 19:00:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 ************************************ 00:05:38.122 END TEST rpc_plugins 00:05:38.122 ************************************ 00:05:38.122 19:00:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:38.122 19:00:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.122 19:00:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.122 19:00:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 ************************************ 00:05:38.122 START TEST rpc_trace_cmd_test 00:05:38.122 ************************************ 00:05:38.122 19:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:38.122 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:38.122 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.122 19:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.122 19:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.122 19:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.122 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:38.122 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid107982", 00:05:38.122 "tpoint_group_mask": "0x8", 00:05:38.122 "iscsi_conn": { 00:05:38.122 "mask": "0x2", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "scsi": { 00:05:38.122 "mask": "0x4", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "bdev": { 00:05:38.122 "mask": "0x8", 00:05:38.122 "tpoint_mask": "0xffffffffffffffff" 00:05:38.122 }, 00:05:38.122 "nvmf_rdma": { 00:05:38.122 "mask": "0x10", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "nvmf_tcp": { 00:05:38.122 "mask": "0x20", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 
00:05:38.122 "ftl": { 00:05:38.122 "mask": "0x40", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "blobfs": { 00:05:38.122 "mask": "0x80", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "dsa": { 00:05:38.122 "mask": "0x200", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "thread": { 00:05:38.122 "mask": "0x400", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "nvme_pcie": { 00:05:38.122 "mask": "0x800", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "iaa": { 00:05:38.122 "mask": "0x1000", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.122 }, 00:05:38.122 "nvme_tcp": { 00:05:38.122 "mask": "0x2000", 00:05:38.122 "tpoint_mask": "0x0" 00:05:38.123 }, 00:05:38.123 "bdev_nvme": { 00:05:38.123 "mask": "0x4000", 00:05:38.123 "tpoint_mask": "0x0" 00:05:38.123 }, 00:05:38.123 "sock": { 00:05:38.123 "mask": "0x8000", 00:05:38.123 "tpoint_mask": "0x0" 00:05:38.123 }, 00:05:38.123 "blob": { 00:05:38.123 "mask": "0x10000", 00:05:38.123 "tpoint_mask": "0x0" 00:05:38.123 }, 00:05:38.123 "bdev_raid": { 00:05:38.123 "mask": "0x20000", 00:05:38.123 "tpoint_mask": "0x0" 00:05:38.123 }, 00:05:38.123 "scheduler": { 00:05:38.123 "mask": "0x40000", 00:05:38.123 "tpoint_mask": "0x0" 00:05:38.123 } 00:05:38.123 }' 00:05:38.123 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.123 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:38.123 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:38.383 00:05:38.383 real 0m0.218s 00:05:38.383 user 0m0.174s 00:05:38.383 sys 0m0.036s 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.383 19:00:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.383 ************************************ 00:05:38.383 END TEST rpc_trace_cmd_test 00:05:38.383 ************************************ 00:05:38.383 19:00:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.383 19:00:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.383 19:00:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.383 19:00:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.383 19:00:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.383 19:00:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.383 ************************************ 00:05:38.383 START TEST rpc_daemon_integrity 00:05:38.383 ************************************ 00:05:38.383 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:38.383 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.383 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.383 19:00:12 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.383 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.383 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.383 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.644 { 00:05:38.644 "name": "Malloc2", 00:05:38.644 "aliases": [ 00:05:38.644 "de044690-3746-438c-ace7-e195371543e3" 00:05:38.644 ], 00:05:38.644 "product_name": "Malloc disk", 00:05:38.644 "block_size": 512, 00:05:38.644 "num_blocks": 16384, 00:05:38.644 "uuid": "de044690-3746-438c-ace7-e195371543e3", 00:05:38.644 "assigned_rate_limits": { 00:05:38.644 "rw_ios_per_sec": 0, 00:05:38.644 "rw_mbytes_per_sec": 0, 00:05:38.644 "r_mbytes_per_sec": 0, 00:05:38.644 "w_mbytes_per_sec": 0 00:05:38.644 }, 00:05:38.644 "claimed": false, 00:05:38.644 "zoned": false, 00:05:38.644 "supported_io_types": { 00:05:38.644 "read": true, 00:05:38.644 "write": true, 00:05:38.644 "unmap": true, 00:05:38.644 "flush": true, 00:05:38.644 "reset": true, 00:05:38.644 "nvme_admin": false, 00:05:38.644 "nvme_io": false, 00:05:38.644 "nvme_io_md": false, 00:05:38.644 "write_zeroes": true, 00:05:38.644 "zcopy": true, 00:05:38.644 "get_zone_info": false, 00:05:38.644 "zone_management": false, 00:05:38.644 "zone_append": false, 00:05:38.644 "compare": false, 00:05:38.644 "compare_and_write": false, 00:05:38.644 "abort": true, 00:05:38.644 "seek_hole": false, 00:05:38.644 "seek_data": false, 00:05:38.644 "copy": true, 00:05:38.644 "nvme_iov_md": false 00:05:38.644 }, 00:05:38.644 "memory_domains": [ 00:05:38.644 { 00:05:38.644 "dma_device_id": "system", 00:05:38.644 "dma_device_type": 1 00:05:38.644 }, 00:05:38.644 { 00:05:38.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.644 "dma_device_type": 2 00:05:38.644 } 00:05:38.644 ], 00:05:38.644 "driver_specific": {} 00:05:38.644 } 00:05:38.644 ]' 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.644 [2024-12-13 19:00:12.860814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.644 [2024-12-13 19:00:12.860840] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.644 [2024-12-13 19:00:12.860856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x946480 00:05:38.644 [2024-12-13 19:00:12.860864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.644 [2024-12-13 19:00:12.861775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.644 [2024-12-13 19:00:12.861797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.644 Passthru0 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.644 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.644 { 00:05:38.644 "name": "Malloc2", 00:05:38.644 "aliases": [ 00:05:38.644 "de044690-3746-438c-ace7-e195371543e3" 00:05:38.644 ], 00:05:38.644 "product_name": "Malloc disk", 00:05:38.644 "block_size": 512, 00:05:38.644 "num_blocks": 16384, 00:05:38.644 "uuid": "de044690-3746-438c-ace7-e195371543e3", 00:05:38.644 "assigned_rate_limits": { 00:05:38.644 "rw_ios_per_sec": 0, 00:05:38.644 "rw_mbytes_per_sec": 0, 00:05:38.644 "r_mbytes_per_sec": 0, 00:05:38.644 "w_mbytes_per_sec": 0 00:05:38.644 }, 00:05:38.644 "claimed": true, 00:05:38.644 "claim_type": "exclusive_write", 00:05:38.644 "zoned": false, 00:05:38.644 "supported_io_types": { 00:05:38.644 "read": true, 00:05:38.644 "write": true, 00:05:38.644 "unmap": true, 00:05:38.644 "flush": true, 00:05:38.644 "reset": true, 00:05:38.644 "nvme_admin": false, 00:05:38.644 "nvme_io": false, 00:05:38.644 "nvme_io_md": false, 00:05:38.644 "write_zeroes": true, 00:05:38.644 "zcopy": true, 00:05:38.644 "get_zone_info": false, 00:05:38.644 "zone_management": false, 00:05:38.644 "zone_append": false, 00:05:38.644 "compare": false, 00:05:38.644 "compare_and_write": false, 00:05:38.644 "abort": true, 00:05:38.644 "seek_hole": false, 00:05:38.644 "seek_data": false, 00:05:38.644 "copy": true, 00:05:38.644 "nvme_iov_md": false 00:05:38.644 }, 00:05:38.644 "memory_domains": [ 00:05:38.644 { 00:05:38.644 "dma_device_id": "system", 00:05:38.644 "dma_device_type": 1 00:05:38.644 }, 00:05:38.644 { 00:05:38.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.644 "dma_device_type": 2 00:05:38.644 } 00:05:38.644 ], 00:05:38.644 "driver_specific": {} 00:05:38.644 }, 00:05:38.644 { 00:05:38.644 "name": "Passthru0", 00:05:38.644 "aliases": [ 00:05:38.644 "cd788ae4-36f6-5fb4-b420-3fbbcfce3f7b" 00:05:38.644 ], 00:05:38.644 "product_name": "passthru", 00:05:38.644 "block_size": 512, 00:05:38.644 "num_blocks": 16384, 00:05:38.644 "uuid": "cd788ae4-36f6-5fb4-b420-3fbbcfce3f7b", 00:05:38.644 "assigned_rate_limits": { 00:05:38.644 "rw_ios_per_sec": 0, 00:05:38.644 "rw_mbytes_per_sec": 0, 00:05:38.644 "r_mbytes_per_sec": 0, 00:05:38.644 "w_mbytes_per_sec": 0 00:05:38.644 }, 00:05:38.644 "claimed": false, 00:05:38.644 "zoned": false, 00:05:38.644 "supported_io_types": { 00:05:38.644 "read": true, 00:05:38.644 "write": true, 00:05:38.644 "unmap": true, 00:05:38.644 "flush": true, 00:05:38.644 "reset": true, 00:05:38.644 "nvme_admin": false, 
00:05:38.644 "nvme_io": false, 00:05:38.644 "nvme_io_md": false, 00:05:38.644 "write_zeroes": true, 00:05:38.644 "zcopy": true, 00:05:38.644 "get_zone_info": false, 00:05:38.644 "zone_management": false, 00:05:38.644 "zone_append": false, 00:05:38.644 "compare": false, 00:05:38.644 "compare_and_write": false, 00:05:38.644 "abort": true, 00:05:38.644 "seek_hole": false, 00:05:38.645 "seek_data": false, 00:05:38.645 "copy": true, 00:05:38.645 "nvme_iov_md": false 00:05:38.645 }, 00:05:38.645 "memory_domains": [ 00:05:38.645 { 00:05:38.645 "dma_device_id": "system", 00:05:38.645 "dma_device_type": 1 00:05:38.645 }, 00:05:38.645 { 00:05:38.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.645 "dma_device_type": 2 00:05:38.645 } 00:05:38.645 ], 00:05:38.645 "driver_specific": { 00:05:38.645 "passthru": { 00:05:38.645 "name": "Passthru0", 00:05:38.645 "base_bdev_name": "Malloc2" 00:05:38.645 } 00:05:38.645 } 00:05:38.645 } 00:05:38.645 ]' 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.645 19:00:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.645 19:00:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.645 00:05:38.645 real 0m0.285s 00:05:38.645 user 0m0.170s 00:05:38.645 sys 0m0.053s 00:05:38.645 19:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.645 19:00:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.645 ************************************ 00:05:38.645 END TEST rpc_daemon_integrity 00:05:38.645 ************************************ 00:05:38.905 19:00:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.905 19:00:13 rpc -- rpc/rpc.sh@84 -- # killprocess 107982 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@954 -- # '[' -z 107982 ']' 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@958 -- # kill -0 107982 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107982 00:05:38.905 19:00:13 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107982' 00:05:38.905 killing process with pid 107982 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@973 -- # kill 107982 00:05:38.905 19:00:13 rpc -- common/autotest_common.sh@978 -- # wait 107982 00:05:39.166 00:05:39.166 real 0m2.158s 00:05:39.166 user 0m2.698s 00:05:39.166 sys 0m0.826s 00:05:39.166 19:00:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.166 19:00:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.166 ************************************ 00:05:39.166 END TEST rpc 00:05:39.166 ************************************ 00:05:39.166 19:00:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.166 19:00:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.166 19:00:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.166 19:00:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.166 ************************************ 00:05:39.166 START TEST skip_rpc 00:05:39.166 ************************************ 00:05:39.166 19:00:13 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.426 * Looking for test storage... 00:05:39.426 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.426 19:00:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.426 --rc genhtml_branch_coverage=1 00:05:39.426 --rc genhtml_function_coverage=1 00:05:39.426 --rc genhtml_legend=1 00:05:39.426 --rc geninfo_all_blocks=1 00:05:39.426 --rc geninfo_unexecuted_blocks=1 00:05:39.426 00:05:39.426 ' 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.426 --rc genhtml_branch_coverage=1 00:05:39.426 --rc genhtml_function_coverage=1 00:05:39.426 --rc genhtml_legend=1 00:05:39.426 --rc geninfo_all_blocks=1 00:05:39.426 --rc geninfo_unexecuted_blocks=1 00:05:39.426 00:05:39.426 ' 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.426 --rc genhtml_branch_coverage=1 00:05:39.426 --rc genhtml_function_coverage=1 00:05:39.426 --rc genhtml_legend=1 00:05:39.426 --rc geninfo_all_blocks=1 00:05:39.426 --rc geninfo_unexecuted_blocks=1 00:05:39.426 00:05:39.426 ' 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.426 --rc genhtml_branch_coverage=1 00:05:39.426 --rc genhtml_function_coverage=1 00:05:39.426 --rc genhtml_legend=1 00:05:39.426 --rc geninfo_all_blocks=1 00:05:39.426 --rc geninfo_unexecuted_blocks=1 00:05:39.426 00:05:39.426 ' 00:05:39.426 19:00:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:39.426 19:00:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:39.426 19:00:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.426 19:00:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.426 ************************************ 00:05:39.426 START TEST skip_rpc 00:05:39.426 ************************************ 00:05:39.426 19:00:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:39.426 19:00:13 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=108503 00:05:39.426 19:00:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.426 19:00:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:39.426 19:00:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:39.426 [2024-12-13 19:00:13.765140] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:39.426 [2024-12-13 19:00:13.765183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108503 ] 00:05:39.687 [2024-12-13 19:00:13.853717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.687 [2024-12-13 19:00:13.876078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 108503 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 108503 ']' 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 108503 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108503 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108503' 00:05:44.968 killing process with pid 108503 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- 
common/autotest_common.sh@973 -- # kill 108503 00:05:44.968 19:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 108503 00:05:44.968 00:05:44.968 real 0m5.373s 00:05:44.968 user 0m5.104s 00:05:44.968 sys 0m0.314s 00:05:44.968 19:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.968 19:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.968 ************************************ 00:05:44.968 END TEST skip_rpc 00:05:44.968 ************************************ 00:05:44.968 19:00:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:44.968 19:00:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.969 19:00:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.969 19:00:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.969 ************************************ 00:05:44.969 START TEST skip_rpc_with_json 00:05:44.969 ************************************ 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=109526 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 109526 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 109526 ']' 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.969 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.969 [2024-12-13 19:00:19.223005] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
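The "Waiting for process to start up..." lines throughout this run come from the harness's waitforlisten helper, which polls the target's RPC socket until it answers. A simplified sketch of that loop (run from the spdk checkout; the retry budget of 100 mirrors the max_retries seen above):

    # Poll until the target's RPC socket answers, giving up if the process
    # dies or retries run out. Simplified from the common.sh helper.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for (( i = 100; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died early
            if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                              # socket is answering
            fi
            sleep 0.1
        done
        return 1
    }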
00:05:44.969 [2024-12-13 19:00:19.223058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109526 ] 00:05:44.969 [2024-12-13 19:00:19.315403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.969 [2024-12-13 19:00:19.333949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.229 [2024-12-13 19:00:19.550062] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:45.229 request: 00:05:45.229 { 00:05:45.229 "trtype": "tcp", 00:05:45.229 "method": "nvmf_get_transports", 00:05:45.229 "req_id": 1 00:05:45.229 } 00:05:45.229 Got JSON-RPC error response 00:05:45.229 response: 00:05:45.229 { 00:05:45.229 "code": -19, 00:05:45.229 "message": "No such device" 00:05:45.229 } 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.229 [2024-12-13 19:00:19.562162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.229 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:45.490 { 00:05:45.490 "subsystems": [ 00:05:45.490 { 00:05:45.490 "subsystem": "fsdev", 00:05:45.490 "config": [ 00:05:45.490 { 00:05:45.490 "method": "fsdev_set_opts", 00:05:45.490 "params": { 00:05:45.490 "fsdev_io_pool_size": 65535, 00:05:45.490 "fsdev_io_cache_size": 256 00:05:45.490 } 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "keyring", 00:05:45.490 "config": [] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "iobuf", 00:05:45.490 "config": [ 00:05:45.490 { 00:05:45.490 "method": "iobuf_set_options", 00:05:45.490 "params": { 00:05:45.490 "small_pool_count": 8192, 00:05:45.490 "large_pool_count": 1024, 00:05:45.490 "small_bufsize": 8192, 00:05:45.490 "large_bufsize": 135168, 00:05:45.490 "enable_numa": false 00:05:45.490 } 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "sock", 00:05:45.490 "config": [ 00:05:45.490 { 
00:05:45.490 "method": "sock_set_default_impl", 00:05:45.490 "params": { 00:05:45.490 "impl_name": "posix" 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "sock_impl_set_options", 00:05:45.490 "params": { 00:05:45.490 "impl_name": "ssl", 00:05:45.490 "recv_buf_size": 4096, 00:05:45.490 "send_buf_size": 4096, 00:05:45.490 "enable_recv_pipe": true, 00:05:45.490 "enable_quickack": false, 00:05:45.490 "enable_placement_id": 0, 00:05:45.490 "enable_zerocopy_send_server": true, 00:05:45.490 "enable_zerocopy_send_client": false, 00:05:45.490 "zerocopy_threshold": 0, 00:05:45.490 "tls_version": 0, 00:05:45.490 "enable_ktls": false 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "sock_impl_set_options", 00:05:45.490 "params": { 00:05:45.490 "impl_name": "posix", 00:05:45.490 "recv_buf_size": 2097152, 00:05:45.490 "send_buf_size": 2097152, 00:05:45.490 "enable_recv_pipe": true, 00:05:45.490 "enable_quickack": false, 00:05:45.490 "enable_placement_id": 0, 00:05:45.490 "enable_zerocopy_send_server": true, 00:05:45.490 "enable_zerocopy_send_client": false, 00:05:45.490 "zerocopy_threshold": 0, 00:05:45.490 "tls_version": 0, 00:05:45.490 "enable_ktls": false 00:05:45.490 } 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "vmd", 00:05:45.490 "config": [] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "accel", 00:05:45.490 "config": [ 00:05:45.490 { 00:05:45.490 "method": "accel_set_options", 00:05:45.490 "params": { 00:05:45.490 "small_cache_size": 128, 00:05:45.490 "large_cache_size": 16, 00:05:45.490 "task_count": 2048, 00:05:45.490 "sequence_count": 2048, 00:05:45.490 "buf_count": 2048 00:05:45.490 } 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "bdev", 00:05:45.490 "config": [ 00:05:45.490 { 00:05:45.490 "method": "bdev_set_options", 00:05:45.490 "params": { 00:05:45.490 "bdev_io_pool_size": 65535, 00:05:45.490 "bdev_io_cache_size": 256, 00:05:45.490 "bdev_auto_examine": true, 00:05:45.490 "iobuf_small_cache_size": 128, 00:05:45.490 "iobuf_large_cache_size": 16 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "bdev_raid_set_options", 00:05:45.490 "params": { 00:05:45.490 "process_window_size_kb": 1024, 00:05:45.490 "process_max_bandwidth_mb_sec": 0 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "bdev_iscsi_set_options", 00:05:45.490 "params": { 00:05:45.490 "timeout_sec": 30 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "bdev_nvme_set_options", 00:05:45.490 "params": { 00:05:45.490 "action_on_timeout": "none", 00:05:45.490 "timeout_us": 0, 00:05:45.490 "timeout_admin_us": 0, 00:05:45.490 "keep_alive_timeout_ms": 10000, 00:05:45.490 "arbitration_burst": 0, 00:05:45.490 "low_priority_weight": 0, 00:05:45.490 "medium_priority_weight": 0, 00:05:45.490 "high_priority_weight": 0, 00:05:45.490 "nvme_adminq_poll_period_us": 10000, 00:05:45.490 "nvme_ioq_poll_period_us": 0, 00:05:45.490 "io_queue_requests": 0, 00:05:45.490 "delay_cmd_submit": true, 00:05:45.490 "transport_retry_count": 4, 00:05:45.490 "bdev_retry_count": 3, 00:05:45.490 "transport_ack_timeout": 0, 00:05:45.490 "ctrlr_loss_timeout_sec": 0, 00:05:45.490 "reconnect_delay_sec": 0, 00:05:45.490 "fast_io_fail_timeout_sec": 0, 00:05:45.490 "disable_auto_failback": false, 00:05:45.490 "generate_uuids": false, 00:05:45.490 "transport_tos": 0, 00:05:45.490 "nvme_error_stat": false, 00:05:45.490 "rdma_srq_size": 0, 00:05:45.490 "io_path_stat": false, 
00:05:45.490 "allow_accel_sequence": false, 00:05:45.490 "rdma_max_cq_size": 0, 00:05:45.490 "rdma_cm_event_timeout_ms": 0, 00:05:45.490 "dhchap_digests": [ 00:05:45.490 "sha256", 00:05:45.490 "sha384", 00:05:45.490 "sha512" 00:05:45.490 ], 00:05:45.490 "dhchap_dhgroups": [ 00:05:45.490 "null", 00:05:45.490 "ffdhe2048", 00:05:45.490 "ffdhe3072", 00:05:45.490 "ffdhe4096", 00:05:45.490 "ffdhe6144", 00:05:45.490 "ffdhe8192" 00:05:45.490 ], 00:05:45.490 "rdma_umr_per_io": false 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "bdev_nvme_set_hotplug", 00:05:45.490 "params": { 00:05:45.490 "period_us": 100000, 00:05:45.490 "enable": false 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "bdev_wait_for_examine" 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "scsi", 00:05:45.490 "config": null 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "scheduler", 00:05:45.490 "config": [ 00:05:45.490 { 00:05:45.490 "method": "framework_set_scheduler", 00:05:45.490 "params": { 00:05:45.490 "name": "static" 00:05:45.490 } 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "vhost_scsi", 00:05:45.490 "config": [] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "vhost_blk", 00:05:45.490 "config": [] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "ublk", 00:05:45.490 "config": [] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "nbd", 00:05:45.490 "config": [] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "nvmf", 00:05:45.490 "config": [ 00:05:45.490 { 00:05:45.490 "method": "nvmf_set_config", 00:05:45.490 "params": { 00:05:45.490 "discovery_filter": "match_any", 00:05:45.490 "admin_cmd_passthru": { 00:05:45.490 "identify_ctrlr": false 00:05:45.490 }, 00:05:45.490 "dhchap_digests": [ 00:05:45.490 "sha256", 00:05:45.490 "sha384", 00:05:45.490 "sha512" 00:05:45.490 ], 00:05:45.490 "dhchap_dhgroups": [ 00:05:45.490 "null", 00:05:45.490 "ffdhe2048", 00:05:45.490 "ffdhe3072", 00:05:45.490 "ffdhe4096", 00:05:45.490 "ffdhe6144", 00:05:45.490 "ffdhe8192" 00:05:45.490 ] 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "nvmf_set_max_subsystems", 00:05:45.490 "params": { 00:05:45.490 "max_subsystems": 1024 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "nvmf_set_crdt", 00:05:45.490 "params": { 00:05:45.490 "crdt1": 0, 00:05:45.490 "crdt2": 0, 00:05:45.490 "crdt3": 0 00:05:45.490 } 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "method": "nvmf_create_transport", 00:05:45.490 "params": { 00:05:45.490 "trtype": "TCP", 00:05:45.490 "max_queue_depth": 128, 00:05:45.490 "max_io_qpairs_per_ctrlr": 127, 00:05:45.490 "in_capsule_data_size": 4096, 00:05:45.490 "max_io_size": 131072, 00:05:45.490 "io_unit_size": 131072, 00:05:45.490 "max_aq_depth": 128, 00:05:45.490 "num_shared_buffers": 511, 00:05:45.490 "buf_cache_size": 4294967295, 00:05:45.490 "dif_insert_or_strip": false, 00:05:45.490 "zcopy": false, 00:05:45.490 "c2h_success": true, 00:05:45.490 "sock_priority": 0, 00:05:45.490 "abort_timeout_sec": 1, 00:05:45.490 "ack_timeout": 0, 00:05:45.490 "data_wr_pool_size": 0 00:05:45.490 } 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 }, 00:05:45.490 { 00:05:45.490 "subsystem": "iscsi", 00:05:45.490 "config": [ 00:05:45.490 { 00:05:45.490 "method": "iscsi_set_options", 00:05:45.490 "params": { 00:05:45.490 "node_base": "iqn.2016-06.io.spdk", 00:05:45.490 "max_sessions": 128, 00:05:45.490 "max_connections_per_session": 2, 00:05:45.490 
"max_queue_depth": 64, 00:05:45.490 "default_time2wait": 2, 00:05:45.490 "default_time2retain": 20, 00:05:45.490 "first_burst_length": 8192, 00:05:45.490 "immediate_data": true, 00:05:45.490 "allow_duplicated_isid": false, 00:05:45.490 "error_recovery_level": 0, 00:05:45.490 "nop_timeout": 60, 00:05:45.490 "nop_in_interval": 30, 00:05:45.490 "disable_chap": false, 00:05:45.490 "require_chap": false, 00:05:45.490 "mutual_chap": false, 00:05:45.490 "chap_group": 0, 00:05:45.490 "max_large_datain_per_connection": 64, 00:05:45.490 "max_r2t_per_connection": 4, 00:05:45.490 "pdu_pool_size": 36864, 00:05:45.490 "immediate_data_pool_size": 16384, 00:05:45.490 "data_out_pool_size": 2048 00:05:45.490 } 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 } 00:05:45.490 ] 00:05:45.490 } 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 109526 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 109526 ']' 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 109526 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109526 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109526' 00:05:45.490 killing process with pid 109526 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 109526 00:05:45.490 19:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 109526 00:05:45.750 19:00:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=109739 00:05:45.750 19:00:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:45.750 19:00:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 109739 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 109739 ']' 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 109739 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109739 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109739' 00:05:51.049 killing process with pid 109739 
00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 109739 00:05:51.049 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 109739 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:51.310 00:05:51.310 real 0m6.303s 00:05:51.310 user 0m5.949s 00:05:51.310 sys 0m0.698s 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.310 ************************************ 00:05:51.310 END TEST skip_rpc_with_json 00:05:51.310 ************************************ 00:05:51.310 19:00:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:51.310 19:00:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.310 19:00:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.310 19:00:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.310 ************************************ 00:05:51.310 START TEST skip_rpc_with_delay 00:05:51.310 ************************************ 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.310 [2024-12-13 19:00:25.616775] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.310 00:05:51.310 real 0m0.072s 00:05:51.310 user 0m0.042s 00:05:51.310 sys 0m0.030s 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.310 19:00:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:51.310 ************************************ 00:05:51.310 END TEST skip_rpc_with_delay 00:05:51.310 ************************************ 00:05:51.310 19:00:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:51.310 19:00:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:51.310 19:00:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:51.310 19:00:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.310 19:00:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.310 19:00:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.591 ************************************ 00:05:51.591 START TEST exit_on_failed_rpc_init 00:05:51.591 ************************************ 00:05:51.591 19:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=110661 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 110661 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 110661 ']' 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.592 19:00:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.592 [2024-12-13 19:00:25.776263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:51.592 [2024-12-13 19:00:25.776313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110661 ] 00:05:51.592 [2024-12-13 19:00:25.870766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.592 [2024-12-13 19:00:25.893224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:51.853 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.853 [2024-12-13 19:00:26.158270] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:51.853 [2024-12-13 19:00:26.158321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110801 ] 00:05:52.114 [2024-12-13 19:00:26.247107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.114 [2024-12-13 19:00:26.269178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.114 [2024-12-13 19:00:26.269233] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:52.114 [2024-12-13 19:00:26.269245] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:52.114 [2024-12-13 19:00:26.269253] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 110661 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 110661 ']' 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 110661 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110661 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110661' 00:05:52.114 killing process with pid 110661 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 110661 00:05:52.114 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 110661 00:05:52.375 00:05:52.375 real 0m0.940s 00:05:52.375 user 0m0.946s 00:05:52.375 sys 0m0.447s 00:05:52.375 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.375 19:00:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.375 ************************************ 00:05:52.375 END TEST exit_on_failed_rpc_init 00:05:52.375 ************************************ 00:05:52.375 19:00:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:52.375 00:05:52.375 real 0m13.233s 00:05:52.375 user 0m12.265s 00:05:52.375 sys 0m1.853s 00:05:52.375 19:00:26 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.375 19:00:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.375 ************************************ 00:05:52.375 END TEST skip_rpc 00:05:52.375 ************************************ 00:05:52.636 19:00:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:52.636 19:00:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.636 19:00:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.636 19:00:26 -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.636 ************************************ 00:05:52.636 START TEST rpc_client 00:05:52.636 ************************************ 00:05:52.636 19:00:26 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:52.636 * Looking for test storage... 00:05:52.636 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:52.636 19:00:26 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:52.636 19:00:26 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:52.636 19:00:26 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:52.636 19:00:26 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:52.636 19:00:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:52.637 19:00:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.637 19:00:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:52.637 19:00:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.637 19:00:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.637 19:00:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.637 19:00:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:52.637 19:00:26 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.637 19:00:26 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:52.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.637 --rc genhtml_branch_coverage=1 00:05:52.637 --rc genhtml_function_coverage=1 00:05:52.637 --rc genhtml_legend=1 00:05:52.637 --rc geninfo_all_blocks=1 00:05:52.637 --rc geninfo_unexecuted_blocks=1 00:05:52.637 00:05:52.637 ' 00:05:52.637 19:00:26 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:52.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.637 --rc genhtml_branch_coverage=1 00:05:52.637 --rc genhtml_function_coverage=1 00:05:52.637 --rc genhtml_legend=1 00:05:52.637 --rc geninfo_all_blocks=1 00:05:52.637 --rc geninfo_unexecuted_blocks=1 00:05:52.637 00:05:52.637 ' 00:05:52.637 19:00:26 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:52.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.637 --rc genhtml_branch_coverage=1 00:05:52.637 --rc genhtml_function_coverage=1 00:05:52.637 --rc genhtml_legend=1 00:05:52.637 --rc geninfo_all_blocks=1 00:05:52.637 --rc geninfo_unexecuted_blocks=1 00:05:52.637 00:05:52.637 ' 00:05:52.637 19:00:26 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:52.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.637 --rc genhtml_branch_coverage=1 00:05:52.637 --rc genhtml_function_coverage=1 00:05:52.637 --rc genhtml_legend=1 00:05:52.637 --rc geninfo_all_blocks=1 00:05:52.637 --rc geninfo_unexecuted_blocks=1 00:05:52.637 00:05:52.637 ' 00:05:52.637 19:00:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:52.637 OK 00:05:52.637 19:00:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:52.898 00:05:52.898 real 0m0.222s 00:05:52.898 user 0m0.121s 00:05:52.898 sys 0m0.120s 00:05:52.898 19:00:27 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.898 19:00:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:52.898 ************************************ 00:05:52.898 END TEST rpc_client 00:05:52.898 ************************************ 00:05:52.898 19:00:27 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:52.898 
19:00:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.898 19:00:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.898 19:00:27 -- common/autotest_common.sh@10 -- # set +x 00:05:52.898 ************************************ 00:05:52.898 START TEST json_config 00:05:52.898 ************************************ 00:05:52.898 19:00:27 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:52.898 19:00:27 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:52.898 19:00:27 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:52.898 19:00:27 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:52.898 19:00:27 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:52.898 19:00:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.898 19:00:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.898 19:00:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.898 19:00:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.898 19:00:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.898 19:00:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.898 19:00:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.898 19:00:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.898 19:00:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.898 19:00:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.898 19:00:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.898 19:00:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:52.898 19:00:27 json_config -- scripts/common.sh@345 -- # : 1 00:05:52.898 19:00:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.898 19:00:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.898 19:00:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:52.898 19:00:27 json_config -- scripts/common.sh@353 -- # local d=1 00:05:52.898 19:00:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.898 19:00:27 json_config -- scripts/common.sh@355 -- # echo 1 00:05:52.898 19:00:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.898 19:00:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:52.898 19:00:27 json_config -- scripts/common.sh@353 -- # local d=2 00:05:52.899 19:00:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.899 19:00:27 json_config -- scripts/common.sh@355 -- # echo 2 00:05:52.899 19:00:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.899 19:00:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.899 19:00:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.899 19:00:27 json_config -- scripts/common.sh@368 -- # return 0 00:05:52.899 19:00:27 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.899 19:00:27 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:52.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.899 --rc genhtml_branch_coverage=1 00:05:52.899 --rc genhtml_function_coverage=1 00:05:52.899 --rc genhtml_legend=1 00:05:52.899 --rc geninfo_all_blocks=1 00:05:52.899 --rc geninfo_unexecuted_blocks=1 00:05:52.899 00:05:52.899 ' 00:05:52.899 19:00:27 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:52.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.899 --rc genhtml_branch_coverage=1 00:05:52.899 --rc genhtml_function_coverage=1 00:05:52.899 --rc genhtml_legend=1 00:05:52.899 --rc geninfo_all_blocks=1 00:05:52.899 --rc geninfo_unexecuted_blocks=1 00:05:52.899 00:05:52.899 ' 00:05:52.899 19:00:27 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:52.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.899 --rc genhtml_branch_coverage=1 00:05:52.899 --rc genhtml_function_coverage=1 00:05:52.899 --rc genhtml_legend=1 00:05:52.899 --rc geninfo_all_blocks=1 00:05:52.899 --rc geninfo_unexecuted_blocks=1 00:05:52.899 00:05:52.899 ' 00:05:52.899 19:00:27 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:52.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.899 --rc genhtml_branch_coverage=1 00:05:52.899 --rc genhtml_function_coverage=1 00:05:52.899 --rc genhtml_legend=1 00:05:52.899 --rc geninfo_all_blocks=1 00:05:52.899 --rc geninfo_unexecuted_blocks=1 00:05:52.899 00:05:52.899 ' 00:05:52.899 19:00:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:52.899 19:00:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.899 19:00:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:53.160 19:00:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:53.160 19:00:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.160 19:00:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.160 19:00:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.160 19:00:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.160 19:00:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.160 19:00:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.160 19:00:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:53.160 19:00:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@51 -- # : 0 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:53.160 
19:00:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:53.160 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:53.160 19:00:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:53.160 19:00:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:53.160 19:00:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:53.160 19:00:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:53.160 19:00:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:53.160 19:00:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:53.160 19:00:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:53.161 INFO: JSON configuration test init 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.161 19:00:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:53.161 19:00:27 json_config -- json_config/common.sh@9 -- # 
local app=target 00:05:53.161 19:00:27 json_config -- json_config/common.sh@10 -- # shift 00:05:53.161 19:00:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:53.161 19:00:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:53.161 19:00:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:53.161 19:00:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.161 19:00:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:53.161 19:00:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=111058 00:05:53.161 19:00:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:53.161 Waiting for target to run... 00:05:53.161 19:00:27 json_config -- json_config/common.sh@25 -- # waitforlisten 111058 /var/tmp/spdk_tgt.sock 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 111058 ']' 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.161 19:00:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.161 19:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.161 [2024-12-13 19:00:27.367755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:53.161 [2024-12-13 19:00:27.367809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111058 ] 00:05:53.731 [2024-12-13 19:00:27.827032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.731 [2024-12-13 19:00:27.847214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.991 19:00:28 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.991 19:00:28 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:53.991 19:00:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:53.991 00:05:53.991 19:00:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:53.991 19:00:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:53.991 19:00:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.991 19:00:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.991 19:00:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:53.991 19:00:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:53.991 19:00:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.992 19:00:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.992 19:00:28 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:53.992 19:00:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:53.992 19:00:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:57.291 19:00:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.291 19:00:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:57.291 19:00:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@54 -- # 
echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@54 -- # sort 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:57.291 19:00:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.291 19:00:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:57.291 19:00:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:57.292 19:00:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:57.292 19:00:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:57.292 19:00:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.292 19:00:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.292 19:00:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:57.292 19:00:31 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:57.292 19:00:31 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:57.292 19:00:31 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.292 19:00:31 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:57.292 19:00:31 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:57.292 19:00:31 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.292 19:00:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:05.448 
19:00:38 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@320 -- # e810=() 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@321 -- # x722=() 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@322 -- # mlx=() 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:05.448 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:05.448 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:05.448 19:00:38 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:05.448 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:05.448 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:05.448 19:00:38 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@62 -- # uname 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@78 -- # ip= 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@80 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_0 up 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:05.449 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:05.449 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:05.449 altname enp217s0f0np0 00:06:05.449 altname ens818f0np0 00:06:05.449 inet 192.168.100.8/24 scope global mlx_0_0 00:06:05.449 valid_lft forever preferred_lft forever 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@78 -- # ip= 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@79 -- # [[ -z '' ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@80 -- # ip addr add 
192.168.100.9/24 dev mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@81 -- # ip link set mlx_0_1 up 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@82 -- # (( count = count + 1 )) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:05.449 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:05.449 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:05.449 altname enp217s0f1np1 00:06:05.449 altname ens818f1np1 00:06:05.449 inet 192.168.100.9/24 scope global mlx_0_1 00:06:05.449 valid_lft forever preferred_lft forever 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@450 -- # return 0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@109 -- # continue 2 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:05.449 
19:00:38 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:05.449 192.168.100.9' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:05.449 192.168.100.9' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@485 -- # head -n 1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:05.449 192.168.100.9' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@486 -- # head -n 1 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:05.449 19:00:38 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:05.449 19:00:38 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:06:05.449 19:00:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.449 19:00:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.449 MallocForNvmf0 00:06:05.450 19:00:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.450 19:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.450 MallocForNvmf1 00:06:05.450 19:00:39 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:05.450 19:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:05.450 [2024-12-13 19:00:39.425707] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:05.450 [2024-12-13 19:00:39.489605] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a6bd00/0x19427c0) succeed. 00:06:05.450 [2024-12-13 19:00:39.502319] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a6def0/0x1983e60) succeed. 
00:06:05.450 19:00:39 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.450 19:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.450 19:00:39 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.450 19:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.710 19:00:39 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.710 19:00:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:05.970 19:00:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:05.970 19:00:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:05.970 [2024-12-13 19:00:40.344258] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:06.230 19:00:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:06.230 19:00:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.230 19:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.230 19:00:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:06.230 19:00:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.230 19:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.230 19:00:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:06.230 19:00:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.230 19:00:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.490 MallocBdevForConfigChangeCheck 00:06:06.490 19:00:40 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:06.490 19:00:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.490 19:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.490 19:00:40 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:06.490 19:00:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.751 19:00:41 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:06.751 INFO: shutting down applications... 
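[editor's note] Replayed by hand, the NVMe-oF target configuration built above is a short sequence of rpc.py calls against the target's RPC socket; a sketch assuming spdk_tgt is already listening on /var/tmp/spdk_tgt.sock:

  rpc='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB bdev, 512 B blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1024 B blocks
  $rpc nvmf_create_transport -t rdma -u 8192 -c 0         # -c 0 is raised to the 256 B minimum (see warning above)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420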
00:06:06.751 19:00:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:06.751 19:00:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:06.751 19:00:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:06.751 19:00:41 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:09.293 Calling clear_iscsi_subsystem 00:06:09.293 Calling clear_nvmf_subsystem 00:06:09.293 Calling clear_nbd_subsystem 00:06:09.293 Calling clear_ublk_subsystem 00:06:09.293 Calling clear_vhost_blk_subsystem 00:06:09.293 Calling clear_vhost_scsi_subsystem 00:06:09.293 Calling clear_bdev_subsystem 00:06:09.293 19:00:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:09.293 19:00:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:09.293 19:00:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:09.293 19:00:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.293 19:00:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:09.294 19:00:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:09.554 19:00:43 json_config -- json_config/json_config.sh@352 -- # break 00:06:09.554 19:00:43 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:09.554 19:00:43 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:09.815 19:00:43 json_config -- json_config/common.sh@31 -- # local app=target 00:06:09.815 19:00:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.815 19:00:43 json_config -- json_config/common.sh@35 -- # [[ -n 111058 ]] 00:06:09.815 19:00:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 111058 00:06:09.815 19:00:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.815 19:00:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.815 19:00:43 json_config -- json_config/common.sh@41 -- # kill -0 111058 00:06:09.815 19:00:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.075 19:00:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.075 19:00:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.075 19:00:44 json_config -- json_config/common.sh@41 -- # kill -0 111058 00:06:10.075 19:00:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.075 19:00:44 json_config -- json_config/common.sh@43 -- # break 00:06:10.075 19:00:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.075 19:00:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.075 SPDK target shutdown done 00:06:10.075 19:00:44 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:10.075 INFO: relaunching applications... 
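[editor's note] The shutdown path traced above (json_config/common.sh) is a SIGINT followed by a bounded liveness poll; the same pattern in isolation, with the PID as a placeholder taken from the trace:

  app_pid=111058                                  # PID recorded when the target was launched
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do                # up to ~15 s in 0.5 s steps
      kill -0 "$app_pid" 2>/dev/null || break     # kill -0 probes liveness, sends no signal
      sleep 0.5
  done
  kill -0 "$app_pid" 2>/dev/null || echo 'SPDK target shutdown done'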
00:06:10.075 19:00:44 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.075 19:00:44 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.075 19:00:44 json_config -- json_config/common.sh@10 -- # shift 00:06:10.075 19:00:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.075 19:00:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.075 19:00:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.075 19:00:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.075 19:00:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.075 19:00:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=116295 00:06:10.075 19:00:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.075 Waiting for target to run... 00:06:10.075 19:00:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.075 19:00:44 json_config -- json_config/common.sh@25 -- # waitforlisten 116295 /var/tmp/spdk_tgt.sock 00:06:10.075 19:00:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 116295 ']' 00:06:10.076 19:00:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.076 19:00:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.076 19:00:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.076 19:00:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.076 19:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.336 [2024-12-13 19:00:44.494548] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:10.336 [2024-12-13 19:00:44.494608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116295 ] 00:06:10.596 [2024-12-13 19:00:44.951968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.866 [2024-12-13 19:00:44.974205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.165 [2024-12-13 19:00:48.025604] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x26b0d90/0x27e7d60) succeed. 00:06:14.165 [2024-12-13 19:00:48.036636] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x26b3f90/0x2829400) succeed. 
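[editor's note] Relaunching from the saved config, as done above, is a single spdk_tgt invocation: -m 0x1 pins the reactor to core 0, -s 1024 caps hugepage memory at 1024 MB, -r selects the RPC socket, and --json replays the saved configuration at startup. A sketch of the same launch:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
  app_pid=$!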
00:06:14.165 [2024-12-13 19:00:48.085230] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:14.425 19:00:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.425 19:00:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:14.425 19:00:48 json_config -- json_config/common.sh@26 -- # echo '' 00:06:14.425 00:06:14.425 19:00:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:14.425 19:00:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:14.425 INFO: Checking if target configuration is the same... 00:06:14.425 19:00:48 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.425 19:00:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:14.425 19:00:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.425 + '[' 2 -ne 2 ']' 00:06:14.425 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:14.425 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:14.425 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:14.425 +++ basename /dev/fd/62 00:06:14.425 ++ mktemp /tmp/62.XXX 00:06:14.425 + tmp_file_1=/tmp/62.JGG 00:06:14.425 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.425 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.425 + tmp_file_2=/tmp/spdk_tgt_config.json.i2u 00:06:14.425 + ret=0 00:06:14.425 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:14.686 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:14.946 + diff -u /tmp/62.JGG /tmp/spdk_tgt_config.json.i2u 00:06:14.946 + echo 'INFO: JSON config files are the same' 00:06:14.946 INFO: JSON config files are the same 00:06:14.946 + rm /tmp/62.JGG /tmp/spdk_tgt_config.json.i2u 00:06:14.946 + exit 0 00:06:14.946 19:00:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:14.946 19:00:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:14.946 INFO: changing configuration and checking if this can be detected... 
00:06:14.946 19:00:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:14.946 19:00:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.207 19:00:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:15.207 19:00:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.207 19:00:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.207 + '[' 2 -ne 2 ']' 00:06:15.207 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.207 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:15.207 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:15.207 +++ basename /dev/fd/62 00:06:15.207 ++ mktemp /tmp/62.XXX 00:06:15.207 + tmp_file_1=/tmp/62.jv2 00:06:15.207 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.207 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.207 + tmp_file_2=/tmp/spdk_tgt_config.json.4Sz 00:06:15.207 + ret=0 00:06:15.207 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.467 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.467 + diff -u /tmp/62.jv2 /tmp/spdk_tgt_config.json.4Sz 00:06:15.467 + ret=1 00:06:15.467 + echo '=== Start of file: /tmp/62.jv2 ===' 00:06:15.467 + cat /tmp/62.jv2 00:06:15.467 + echo '=== End of file: /tmp/62.jv2 ===' 00:06:15.467 + echo '' 00:06:15.467 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4Sz ===' 00:06:15.467 + cat /tmp/spdk_tgt_config.json.4Sz 00:06:15.467 + echo '=== End of file: /tmp/spdk_tgt_config.json.4Sz ===' 00:06:15.467 + echo '' 00:06:15.467 + rm /tmp/62.jv2 /tmp/spdk_tgt_config.json.4Sz 00:06:15.467 + exit 1 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:15.467 INFO: configuration change detected. 
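[editor's note] Both comparisons above normalize the JSON through config_filter.py -method sort before diffing, so key ordering cannot produce false positives; a sketch of the same check, assuming config_filter.py reads stdin as json_diff.sh drives it here (temp-file paths are illustrative):

  rpc='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  sort_json='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort'
  $rpc save_config | $sort_json > /tmp/live.json            # normalize the live config
  $sort_json < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.json
  if diff -u /tmp/saved.json /tmp/live.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi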
00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 116295 ]] 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.467 19:00:49 json_config -- json_config/json_config.sh@330 -- # killprocess 116295 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@954 -- # '[' -z 116295 ']' 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@958 -- # kill -0 116295 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@959 -- # uname 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.467 19:00:49 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116295 00:06:15.728 19:00:49 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.728 19:00:49 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.728 19:00:49 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116295' 00:06:15.728 killing process with pid 116295 00:06:15.728 19:00:49 json_config -- common/autotest_common.sh@973 -- # kill 116295 00:06:15.728 19:00:49 json_config -- common/autotest_common.sh@978 -- # wait 116295 00:06:18.271 19:00:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.271 19:00:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:18.271 19:00:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.271 19:00:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.271 19:00:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:18.271 19:00:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:18.271 INFO: Success 00:06:18.271 19:00:52 json_config -- json_config/json_config.sh@1 -- 
# nvmftestfini 00:06:18.271 19:00:52 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:18.271 19:00:52 json_config -- nvmf/common.sh@121 -- # sync 00:06:18.271 19:00:52 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:06:18.271 19:00:52 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:06:18.271 19:00:52 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:18.271 19:00:52 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:18.271 19:00:52 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:06:18.271 00:06:18.271 real 0m25.327s 00:06:18.271 user 0m28.087s 00:06:18.271 sys 0m8.046s 00:06:18.271 19:00:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.271 19:00:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.271 ************************************ 00:06:18.271 END TEST json_config 00:06:18.271 ************************************ 00:06:18.271 19:00:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:18.271 19:00:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.271 19:00:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.271 19:00:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.271 ************************************ 00:06:18.271 START TEST json_config_extra_key 00:06:18.271 ************************************ 00:06:18.271 19:00:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:18.271 19:00:52 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.272 19:00:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.272 19:00:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.533 19:00:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.533 19:00:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.533 19:00:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.534 19:00:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:18.534 19:00:52 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.534 19:00:52 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.534 --rc genhtml_branch_coverage=1 00:06:18.534 --rc genhtml_function_coverage=1 00:06:18.534 --rc genhtml_legend=1 00:06:18.534 --rc geninfo_all_blocks=1 00:06:18.534 --rc geninfo_unexecuted_blocks=1 00:06:18.534 00:06:18.534 ' 00:06:18.534 19:00:52 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.535 --rc genhtml_branch_coverage=1 00:06:18.535 --rc genhtml_function_coverage=1 00:06:18.535 --rc genhtml_legend=1 00:06:18.535 --rc geninfo_all_blocks=1 00:06:18.535 --rc geninfo_unexecuted_blocks=1 00:06:18.535 00:06:18.535 ' 00:06:18.535 19:00:52 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.535 --rc genhtml_branch_coverage=1 00:06:18.535 --rc genhtml_function_coverage=1 00:06:18.535 --rc genhtml_legend=1 00:06:18.535 --rc geninfo_all_blocks=1 00:06:18.535 --rc geninfo_unexecuted_blocks=1 00:06:18.535 00:06:18.535 ' 00:06:18.535 19:00:52 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.535 --rc genhtml_branch_coverage=1 00:06:18.535 --rc genhtml_function_coverage=1 00:06:18.535 --rc genhtml_legend=1 00:06:18.535 --rc geninfo_all_blocks=1 00:06:18.535 --rc geninfo_unexecuted_blocks=1 00:06:18.535 00:06:18.535 ' 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.535 
19:00:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:18.535 19:00:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.535 19:00:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.535 19:00:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.535 19:00:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.535 19:00:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.535 19:00:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.535 19:00:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.535 19:00:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:18.535 19:00:52 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.535 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.535 19:00:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:18.535 INFO: launching applications... 
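[editor's note] The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 running '[' '' -eq 1 ']': test's -eq needs integers on both sides, and the variable expanded empty. A hedged sketch of the failure and one defensive variant (SOME_FLAG is a stand-in name, not the script's actual variable):

  [ '' -eq 1 ]                      # reproduces: [: : integer expression expected
  [ "${SOME_FLAG:-0}" -eq 1 ]       # default the empty/unset value to 0 before the numeric test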
00:06:18.535 19:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:18.535 19:00:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:18.535 19:00:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:18.535 19:00:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.535 19:00:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.535 19:00:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.535 19:00:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.536 19:00:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.536 19:00:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=117889 00:06:18.536 19:00:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.536 Waiting for target to run... 00:06:18.536 19:00:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 117889 /var/tmp/spdk_tgt.sock 00:06:18.536 19:00:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 117889 ']' 00:06:18.536 19:00:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.536 19:00:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.536 19:00:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:18.536 19:00:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.536 19:00:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.536 19:00:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:18.536 [2024-12-13 19:00:52.764193] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:18.536 [2024-12-13 19:00:52.764248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117889 ] 00:06:19.109 [2024-12-13 19:00:53.219539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.109 [2024-12-13 19:00:53.241343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.371 19:00:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.371 19:00:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:19.371 00:06:19.371 19:00:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:19.371 INFO: shutting down applications... 
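[editor's note] waitforlisten, used above, blocks until the freshly launched target is both alive and reachable on its UNIX RPC socket (rpc_addr and max_retries=100 per the trace). A simplified stand-in, not the harness's exact implementation:

  waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
      local max_retries=100
      while (( max_retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          [ -S "$rpc_addr" ] && return 0           # RPC socket has appeared
          sleep 0.1
      done
      return 1
  }
  waitforlisten 117889 /var/tmp/spdk_tgt.sock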
00:06:19.371 19:00:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 117889 ]] 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 117889 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 117889 00:06:19.371 19:00:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.943 19:00:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.943 19:00:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.943 19:00:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 117889 00:06:19.943 19:00:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:19.943 19:00:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:19.943 19:00:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:19.943 19:00:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:19.943 SPDK target shutdown done 00:06:19.943 19:00:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:19.943 Success 00:06:19.943 00:06:19.943 real 0m1.609s 00:06:19.943 user 0m1.184s 00:06:19.943 sys 0m0.613s 00:06:19.943 19:00:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.943 19:00:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.943 ************************************ 00:06:19.943 END TEST json_config_extra_key 00:06:19.943 ************************************ 00:06:19.943 19:00:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.943 19:00:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.943 19:00:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.943 19:00:54 -- common/autotest_common.sh@10 -- # set +x 00:06:19.943 ************************************ 00:06:19.943 START TEST alias_rpc 00:06:19.943 ************************************ 00:06:19.943 19:00:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.943 * Looking for test storage... 
00:06:19.943 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:19.943 19:00:54 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.943 19:00:54 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.943 19:00:54 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.204 19:00:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.204 --rc genhtml_branch_coverage=1 00:06:20.204 --rc genhtml_function_coverage=1 00:06:20.204 --rc genhtml_legend=1 00:06:20.204 --rc geninfo_all_blocks=1 00:06:20.204 --rc geninfo_unexecuted_blocks=1 00:06:20.204 00:06:20.204 ' 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.204 --rc genhtml_branch_coverage=1 00:06:20.204 --rc genhtml_function_coverage=1 00:06:20.204 --rc genhtml_legend=1 00:06:20.204 --rc geninfo_all_blocks=1 00:06:20.204 --rc geninfo_unexecuted_blocks=1 00:06:20.204 00:06:20.204 ' 00:06:20.204 19:00:54 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.204 --rc genhtml_branch_coverage=1 00:06:20.204 --rc genhtml_function_coverage=1 00:06:20.204 --rc genhtml_legend=1 00:06:20.204 --rc geninfo_all_blocks=1 00:06:20.204 --rc geninfo_unexecuted_blocks=1 00:06:20.204 00:06:20.204 ' 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.204 --rc genhtml_branch_coverage=1 00:06:20.204 --rc genhtml_function_coverage=1 00:06:20.204 --rc genhtml_legend=1 00:06:20.204 --rc geninfo_all_blocks=1 00:06:20.204 --rc geninfo_unexecuted_blocks=1 00:06:20.204 00:06:20.204 ' 00:06:20.204 19:00:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.204 19:00:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=118209 00:06:20.204 19:00:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:20.204 19:00:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 118209 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 118209 ']' 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.204 19:00:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.204 [2024-12-13 19:00:54.448265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:20.204 [2024-12-13 19:00:54.448315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118209 ] 00:06:20.204 [2024-12-13 19:00:54.539894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.204 [2024-12-13 19:00:54.561518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.464 19:00:54 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.464 19:00:54 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.464 19:00:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:20.725 19:00:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 118209 00:06:20.725 19:00:54 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 118209 ']' 00:06:20.725 19:00:54 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 118209 00:06:20.725 19:00:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:20.725 19:00:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.725 19:00:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118209 00:06:20.725 19:00:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.725 19:00:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.725 19:00:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118209' 00:06:20.725 killing process with pid 118209 00:06:20.725 19:00:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 118209 00:06:20.725 19:00:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 118209 00:06:21.294 00:06:21.294 real 0m1.162s 00:06:21.294 user 0m1.135s 00:06:21.294 sys 0m0.487s 00:06:21.294 19:00:55 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.294 19:00:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.294 ************************************ 00:06:21.294 END TEST alias_rpc 00:06:21.294 ************************************ 00:06:21.294 19:00:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:21.294 19:00:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:21.294 19:00:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.294 19:00:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.294 19:00:55 -- common/autotest_common.sh@10 -- # set +x 00:06:21.294 ************************************ 00:06:21.294 START TEST spdkcli_tcp 00:06:21.294 ************************************ 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:21.294 * Looking for test storage... 
00:06:21.294 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.294 19:00:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.294 --rc genhtml_branch_coverage=1 00:06:21.294 --rc genhtml_function_coverage=1 00:06:21.294 --rc genhtml_legend=1 00:06:21.294 --rc geninfo_all_blocks=1 00:06:21.294 --rc geninfo_unexecuted_blocks=1 00:06:21.294 00:06:21.294 ' 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.294 --rc genhtml_branch_coverage=1 00:06:21.294 --rc genhtml_function_coverage=1 00:06:21.294 --rc genhtml_legend=1 00:06:21.294 --rc geninfo_all_blocks=1 00:06:21.294 --rc geninfo_unexecuted_blocks=1 
00:06:21.294 00:06:21.294 ' 00:06:21.294 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.294 --rc genhtml_branch_coverage=1 00:06:21.294 --rc genhtml_function_coverage=1 00:06:21.294 --rc genhtml_legend=1 00:06:21.294 --rc geninfo_all_blocks=1 00:06:21.295 --rc geninfo_unexecuted_blocks=1 00:06:21.295 00:06:21.295 ' 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.295 --rc genhtml_branch_coverage=1 00:06:21.295 --rc genhtml_function_coverage=1 00:06:21.295 --rc genhtml_legend=1 00:06:21.295 --rc geninfo_all_blocks=1 00:06:21.295 --rc geninfo_unexecuted_blocks=1 00:06:21.295 00:06:21.295 ' 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=118538 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 118538 00:06:21.295 19:00:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 118538 ']' 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.295 19:00:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.789 [2024-12-13 19:00:55.702224] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
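[editor's note] The spdkcli_tcp setup launches the target on two cores (-m 0x3, main core 0 via -p 0) and then bridges TCP to the UNIX RPC socket with socat, so rpc.py can be exercised over 127.0.0.1:9998; a sketch mirroring the bridge the harness sets up just below:

  IP_ADDRESS=127.0.0.1
  PORT=9998
  socat TCP-LISTEN:$PORT UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s $IP_ADDRESS -p $PORT rpc_get_methods
  kill "$socat_pid" 2>/dev/null || true            # socat exits after one connection anyway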
00:06:21.789 [2024-12-13 19:00:55.702276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118538 ] 00:06:21.789 [2024-12-13 19:00:55.795294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.789 [2024-12-13 19:00:55.819289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.789 [2024-12-13 19:00:55.819290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.789 19:00:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.789 19:00:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:21.789 19:00:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=118542 00:06:21.789 19:00:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:21.789 19:00:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:22.050 [ 00:06:22.050 "bdev_malloc_delete", 00:06:22.050 "bdev_malloc_create", 00:06:22.050 "bdev_null_resize", 00:06:22.050 "bdev_null_delete", 00:06:22.050 "bdev_null_create", 00:06:22.050 "bdev_nvme_cuse_unregister", 00:06:22.050 "bdev_nvme_cuse_register", 00:06:22.050 "bdev_opal_new_user", 00:06:22.050 "bdev_opal_set_lock_state", 00:06:22.050 "bdev_opal_delete", 00:06:22.050 "bdev_opal_get_info", 00:06:22.050 "bdev_opal_create", 00:06:22.050 "bdev_nvme_opal_revert", 00:06:22.050 "bdev_nvme_opal_init", 00:06:22.050 "bdev_nvme_send_cmd", 00:06:22.050 "bdev_nvme_set_keys", 00:06:22.050 "bdev_nvme_get_path_iostat", 00:06:22.050 "bdev_nvme_get_mdns_discovery_info", 00:06:22.050 "bdev_nvme_stop_mdns_discovery", 00:06:22.050 "bdev_nvme_start_mdns_discovery", 00:06:22.050 "bdev_nvme_set_multipath_policy", 00:06:22.051 "bdev_nvme_set_preferred_path", 00:06:22.051 "bdev_nvme_get_io_paths", 00:06:22.051 "bdev_nvme_remove_error_injection", 00:06:22.051 "bdev_nvme_add_error_injection", 00:06:22.051 "bdev_nvme_get_discovery_info", 00:06:22.051 "bdev_nvme_stop_discovery", 00:06:22.051 "bdev_nvme_start_discovery", 00:06:22.051 "bdev_nvme_get_controller_health_info", 00:06:22.051 "bdev_nvme_disable_controller", 00:06:22.051 "bdev_nvme_enable_controller", 00:06:22.051 "bdev_nvme_reset_controller", 00:06:22.051 "bdev_nvme_get_transport_statistics", 00:06:22.051 "bdev_nvme_apply_firmware", 00:06:22.051 "bdev_nvme_detach_controller", 00:06:22.051 "bdev_nvme_get_controllers", 00:06:22.051 "bdev_nvme_attach_controller", 00:06:22.051 "bdev_nvme_set_hotplug", 00:06:22.051 "bdev_nvme_set_options", 00:06:22.051 "bdev_passthru_delete", 00:06:22.051 "bdev_passthru_create", 00:06:22.051 "bdev_lvol_set_parent_bdev", 00:06:22.051 "bdev_lvol_set_parent", 00:06:22.051 "bdev_lvol_check_shallow_copy", 00:06:22.051 "bdev_lvol_start_shallow_copy", 00:06:22.051 "bdev_lvol_grow_lvstore", 00:06:22.051 "bdev_lvol_get_lvols", 00:06:22.051 "bdev_lvol_get_lvstores", 00:06:22.051 "bdev_lvol_delete", 00:06:22.051 "bdev_lvol_set_read_only", 00:06:22.051 "bdev_lvol_resize", 00:06:22.051 "bdev_lvol_decouple_parent", 00:06:22.051 "bdev_lvol_inflate", 00:06:22.051 "bdev_lvol_rename", 00:06:22.051 "bdev_lvol_clone_bdev", 00:06:22.051 "bdev_lvol_clone", 00:06:22.051 "bdev_lvol_snapshot", 00:06:22.051 "bdev_lvol_create", 00:06:22.051 "bdev_lvol_delete_lvstore", 00:06:22.051 "bdev_lvol_rename_lvstore", 00:06:22.051 
"bdev_lvol_create_lvstore", 00:06:22.051 "bdev_raid_set_options", 00:06:22.051 "bdev_raid_remove_base_bdev", 00:06:22.051 "bdev_raid_add_base_bdev", 00:06:22.051 "bdev_raid_delete", 00:06:22.051 "bdev_raid_create", 00:06:22.051 "bdev_raid_get_bdevs", 00:06:22.051 "bdev_error_inject_error", 00:06:22.051 "bdev_error_delete", 00:06:22.051 "bdev_error_create", 00:06:22.051 "bdev_split_delete", 00:06:22.051 "bdev_split_create", 00:06:22.051 "bdev_delay_delete", 00:06:22.051 "bdev_delay_create", 00:06:22.051 "bdev_delay_update_latency", 00:06:22.051 "bdev_zone_block_delete", 00:06:22.051 "bdev_zone_block_create", 00:06:22.051 "blobfs_create", 00:06:22.051 "blobfs_detect", 00:06:22.051 "blobfs_set_cache_size", 00:06:22.051 "bdev_aio_delete", 00:06:22.051 "bdev_aio_rescan", 00:06:22.051 "bdev_aio_create", 00:06:22.051 "bdev_ftl_set_property", 00:06:22.051 "bdev_ftl_get_properties", 00:06:22.051 "bdev_ftl_get_stats", 00:06:22.051 "bdev_ftl_unmap", 00:06:22.051 "bdev_ftl_unload", 00:06:22.051 "bdev_ftl_delete", 00:06:22.051 "bdev_ftl_load", 00:06:22.051 "bdev_ftl_create", 00:06:22.051 "bdev_virtio_attach_controller", 00:06:22.051 "bdev_virtio_scsi_get_devices", 00:06:22.051 "bdev_virtio_detach_controller", 00:06:22.051 "bdev_virtio_blk_set_hotplug", 00:06:22.051 "bdev_iscsi_delete", 00:06:22.051 "bdev_iscsi_create", 00:06:22.051 "bdev_iscsi_set_options", 00:06:22.051 "accel_error_inject_error", 00:06:22.051 "ioat_scan_accel_module", 00:06:22.051 "dsa_scan_accel_module", 00:06:22.051 "iaa_scan_accel_module", 00:06:22.051 "keyring_file_remove_key", 00:06:22.051 "keyring_file_add_key", 00:06:22.051 "keyring_linux_set_options", 00:06:22.051 "fsdev_aio_delete", 00:06:22.051 "fsdev_aio_create", 00:06:22.051 "iscsi_get_histogram", 00:06:22.051 "iscsi_enable_histogram", 00:06:22.051 "iscsi_set_options", 00:06:22.051 "iscsi_get_auth_groups", 00:06:22.051 "iscsi_auth_group_remove_secret", 00:06:22.051 "iscsi_auth_group_add_secret", 00:06:22.051 "iscsi_delete_auth_group", 00:06:22.051 "iscsi_create_auth_group", 00:06:22.051 "iscsi_set_discovery_auth", 00:06:22.051 "iscsi_get_options", 00:06:22.051 "iscsi_target_node_request_logout", 00:06:22.051 "iscsi_target_node_set_redirect", 00:06:22.051 "iscsi_target_node_set_auth", 00:06:22.051 "iscsi_target_node_add_lun", 00:06:22.051 "iscsi_get_stats", 00:06:22.051 "iscsi_get_connections", 00:06:22.051 "iscsi_portal_group_set_auth", 00:06:22.051 "iscsi_start_portal_group", 00:06:22.051 "iscsi_delete_portal_group", 00:06:22.051 "iscsi_create_portal_group", 00:06:22.051 "iscsi_get_portal_groups", 00:06:22.051 "iscsi_delete_target_node", 00:06:22.051 "iscsi_target_node_remove_pg_ig_maps", 00:06:22.051 "iscsi_target_node_add_pg_ig_maps", 00:06:22.051 "iscsi_create_target_node", 00:06:22.051 "iscsi_get_target_nodes", 00:06:22.051 "iscsi_delete_initiator_group", 00:06:22.051 "iscsi_initiator_group_remove_initiators", 00:06:22.051 "iscsi_initiator_group_add_initiators", 00:06:22.051 "iscsi_create_initiator_group", 00:06:22.051 "iscsi_get_initiator_groups", 00:06:22.051 "nvmf_set_crdt", 00:06:22.051 "nvmf_set_config", 00:06:22.051 "nvmf_set_max_subsystems", 00:06:22.051 "nvmf_stop_mdns_prr", 00:06:22.051 "nvmf_publish_mdns_prr", 00:06:22.051 "nvmf_subsystem_get_listeners", 00:06:22.051 "nvmf_subsystem_get_qpairs", 00:06:22.051 "nvmf_subsystem_get_controllers", 00:06:22.051 "nvmf_get_stats", 00:06:22.051 "nvmf_get_transports", 00:06:22.051 "nvmf_create_transport", 00:06:22.051 "nvmf_get_targets", 00:06:22.051 "nvmf_delete_target", 00:06:22.051 "nvmf_create_target", 00:06:22.051 
"nvmf_subsystem_allow_any_host", 00:06:22.051 "nvmf_subsystem_set_keys", 00:06:22.051 "nvmf_subsystem_remove_host", 00:06:22.051 "nvmf_subsystem_add_host", 00:06:22.051 "nvmf_ns_remove_host", 00:06:22.051 "nvmf_ns_add_host", 00:06:22.051 "nvmf_subsystem_remove_ns", 00:06:22.051 "nvmf_subsystem_set_ns_ana_group", 00:06:22.051 "nvmf_subsystem_add_ns", 00:06:22.051 "nvmf_subsystem_listener_set_ana_state", 00:06:22.051 "nvmf_discovery_get_referrals", 00:06:22.051 "nvmf_discovery_remove_referral", 00:06:22.051 "nvmf_discovery_add_referral", 00:06:22.051 "nvmf_subsystem_remove_listener", 00:06:22.051 "nvmf_subsystem_add_listener", 00:06:22.051 "nvmf_delete_subsystem", 00:06:22.051 "nvmf_create_subsystem", 00:06:22.051 "nvmf_get_subsystems", 00:06:22.051 "env_dpdk_get_mem_stats", 00:06:22.051 "nbd_get_disks", 00:06:22.051 "nbd_stop_disk", 00:06:22.051 "nbd_start_disk", 00:06:22.051 "ublk_recover_disk", 00:06:22.051 "ublk_get_disks", 00:06:22.051 "ublk_stop_disk", 00:06:22.051 "ublk_start_disk", 00:06:22.051 "ublk_destroy_target", 00:06:22.051 "ublk_create_target", 00:06:22.051 "virtio_blk_create_transport", 00:06:22.051 "virtio_blk_get_transports", 00:06:22.051 "vhost_controller_set_coalescing", 00:06:22.051 "vhost_get_controllers", 00:06:22.051 "vhost_delete_controller", 00:06:22.051 "vhost_create_blk_controller", 00:06:22.051 "vhost_scsi_controller_remove_target", 00:06:22.051 "vhost_scsi_controller_add_target", 00:06:22.051 "vhost_start_scsi_controller", 00:06:22.051 "vhost_create_scsi_controller", 00:06:22.051 "thread_set_cpumask", 00:06:22.051 "scheduler_set_options", 00:06:22.051 "framework_get_governor", 00:06:22.051 "framework_get_scheduler", 00:06:22.051 "framework_set_scheduler", 00:06:22.051 "framework_get_reactors", 00:06:22.051 "thread_get_io_channels", 00:06:22.051 "thread_get_pollers", 00:06:22.051 "thread_get_stats", 00:06:22.051 "framework_monitor_context_switch", 00:06:22.051 "spdk_kill_instance", 00:06:22.051 "log_enable_timestamps", 00:06:22.051 "log_get_flags", 00:06:22.051 "log_clear_flag", 00:06:22.051 "log_set_flag", 00:06:22.051 "log_get_level", 00:06:22.051 "log_set_level", 00:06:22.051 "log_get_print_level", 00:06:22.051 "log_set_print_level", 00:06:22.051 "framework_enable_cpumask_locks", 00:06:22.051 "framework_disable_cpumask_locks", 00:06:22.051 "framework_wait_init", 00:06:22.051 "framework_start_init", 00:06:22.051 "scsi_get_devices", 00:06:22.051 "bdev_get_histogram", 00:06:22.051 "bdev_enable_histogram", 00:06:22.052 "bdev_set_qos_limit", 00:06:22.052 "bdev_set_qd_sampling_period", 00:06:22.052 "bdev_get_bdevs", 00:06:22.052 "bdev_reset_iostat", 00:06:22.052 "bdev_get_iostat", 00:06:22.052 "bdev_examine", 00:06:22.052 "bdev_wait_for_examine", 00:06:22.052 "bdev_set_options", 00:06:22.052 "accel_get_stats", 00:06:22.052 "accel_set_options", 00:06:22.052 "accel_set_driver", 00:06:22.052 "accel_crypto_key_destroy", 00:06:22.052 "accel_crypto_keys_get", 00:06:22.052 "accel_crypto_key_create", 00:06:22.052 "accel_assign_opc", 00:06:22.052 "accel_get_module_info", 00:06:22.052 "accel_get_opc_assignments", 00:06:22.052 "vmd_rescan", 00:06:22.052 "vmd_remove_device", 00:06:22.052 "vmd_enable", 00:06:22.052 "sock_get_default_impl", 00:06:22.052 "sock_set_default_impl", 00:06:22.052 "sock_impl_set_options", 00:06:22.052 "sock_impl_get_options", 00:06:22.052 "iobuf_get_stats", 00:06:22.052 "iobuf_set_options", 00:06:22.052 "keyring_get_keys", 00:06:22.052 "framework_get_pci_devices", 00:06:22.052 "framework_get_config", 00:06:22.052 "framework_get_subsystems", 00:06:22.052 
"fsdev_set_opts", 00:06:22.052 "fsdev_get_opts", 00:06:22.052 "trace_get_info", 00:06:22.052 "trace_get_tpoint_group_mask", 00:06:22.052 "trace_disable_tpoint_group", 00:06:22.052 "trace_enable_tpoint_group", 00:06:22.052 "trace_clear_tpoint_mask", 00:06:22.052 "trace_set_tpoint_mask", 00:06:22.052 "notify_get_notifications", 00:06:22.052 "notify_get_types", 00:06:22.052 "spdk_get_version", 00:06:22.052 "rpc_get_methods" 00:06:22.052 ] 00:06:22.052 19:00:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.052 19:00:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:22.052 19:00:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 118538 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 118538 ']' 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 118538 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118538 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118538' 00:06:22.052 killing process with pid 118538 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 118538 00:06:22.052 19:00:56 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 118538 00:06:22.312 00:06:22.312 real 0m1.162s 00:06:22.312 user 0m1.877s 00:06:22.312 sys 0m0.531s 00:06:22.312 19:00:56 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.312 19:00:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.312 ************************************ 00:06:22.313 END TEST spdkcli_tcp 00:06:22.313 ************************************ 00:06:22.313 19:00:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.313 19:00:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.313 19:00:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.313 19:00:56 -- common/autotest_common.sh@10 -- # set +x 00:06:22.573 ************************************ 00:06:22.573 START TEST dpdk_mem_utility 00:06:22.573 ************************************ 00:06:22.573 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.573 * Looking for test storage... 
00:06:22.573 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:22.573 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.573 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.573 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.573 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.573 19:00:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.574 19:00:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.574 --rc genhtml_branch_coverage=1 00:06:22.574 --rc genhtml_function_coverage=1 00:06:22.574 --rc genhtml_legend=1 00:06:22.574 --rc geninfo_all_blocks=1 00:06:22.574 --rc geninfo_unexecuted_blocks=1 00:06:22.574 00:06:22.574 ' 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.574 --rc 
genhtml_branch_coverage=1 00:06:22.574 --rc genhtml_function_coverage=1 00:06:22.574 --rc genhtml_legend=1 00:06:22.574 --rc geninfo_all_blocks=1 00:06:22.574 --rc geninfo_unexecuted_blocks=1 00:06:22.574 00:06:22.574 ' 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.574 --rc genhtml_branch_coverage=1 00:06:22.574 --rc genhtml_function_coverage=1 00:06:22.574 --rc genhtml_legend=1 00:06:22.574 --rc geninfo_all_blocks=1 00:06:22.574 --rc geninfo_unexecuted_blocks=1 00:06:22.574 00:06:22.574 ' 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.574 --rc genhtml_branch_coverage=1 00:06:22.574 --rc genhtml_function_coverage=1 00:06:22.574 --rc genhtml_legend=1 00:06:22.574 --rc geninfo_all_blocks=1 00:06:22.574 --rc geninfo_unexecuted_blocks=1 00:06:22.574 00:06:22.574 ' 00:06:22.574 19:00:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:22.574 19:00:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=118847 00:06:22.574 19:00:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.574 19:00:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 118847 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 118847 ']' 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.574 19:00:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.574 [2024-12-13 19:00:56.945698] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:22.574 [2024-12-13 19:00:56.945761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118847 ] 00:06:22.834 [2024-12-13 19:00:57.033314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.834 [2024-12-13 19:00:57.054871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.095 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.095 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:23.095 19:00:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:23.095 19:00:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:23.095 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.095 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.095 { 00:06:23.095 "filename": "/tmp/spdk_mem_dump.txt" 00:06:23.095 } 00:06:23.095 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.095 19:00:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:23.096 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:23.096 1 heaps totaling size 818.000000 MiB 00:06:23.096 size: 818.000000 MiB heap id: 0 00:06:23.096 end heaps---------- 00:06:23.096 9 mempools totaling size 603.782043 MiB 00:06:23.096 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:23.096 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:23.096 size: 100.555481 MiB name: bdev_io_118847 00:06:23.096 size: 50.003479 MiB name: msgpool_118847 00:06:23.096 size: 36.509338 MiB name: fsdev_io_118847 00:06:23.096 size: 21.763794 MiB name: PDU_Pool 00:06:23.096 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:23.096 size: 4.133484 MiB name: evtpool_118847 00:06:23.096 size: 0.026123 MiB name: Session_Pool 00:06:23.096 end mempools------- 00:06:23.096 6 memzones totaling size 4.142822 MiB 00:06:23.096 size: 1.000366 MiB name: RG_ring_0_118847 00:06:23.096 size: 1.000366 MiB name: RG_ring_1_118847 00:06:23.096 size: 1.000366 MiB name: RG_ring_4_118847 00:06:23.096 size: 1.000366 MiB name: RG_ring_5_118847 00:06:23.096 size: 0.125366 MiB name: RG_ring_2_118847 00:06:23.096 size: 0.015991 MiB name: RG_ring_3_118847 00:06:23.096 end memzones------- 00:06:23.096 19:00:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:23.096 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:23.096 list of free elements. 
size: 10.852478 MiB 00:06:23.096 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:23.096 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:23.096 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:23.096 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:23.096 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:23.096 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:23.096 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:23.096 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:23.096 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:23.096 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:23.096 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:23.096 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:23.096 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:23.096 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:23.096 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:23.096 list of standard malloc elements. size: 199.218628 MiB 00:06:23.096 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:23.096 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:23.096 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:23.096 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:23.096 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:23.096 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:23.096 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:23.096 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:23.096 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:23.096 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:23.096 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:23.096 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:23.096 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:23.096 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:23.096 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:23.096 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:23.096 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:23.096 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:23.096 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:23.096 list of memzone associated elements. size: 607.928894 MiB 00:06:23.096 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:23.096 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:23.096 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:23.096 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:23.096 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:23.096 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_118847_0 00:06:23.096 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:23.096 associated memzone info: size: 48.002930 MiB name: MP_msgpool_118847_0 00:06:23.096 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:23.096 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_118847_0 00:06:23.096 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:23.096 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:23.096 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:23.096 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:23.096 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:23.096 associated memzone info: size: 3.000122 MiB name: MP_evtpool_118847_0 00:06:23.096 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:23.096 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_118847 00:06:23.096 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:23.096 associated memzone info: size: 1.007996 MiB name: MP_evtpool_118847 00:06:23.096 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:23.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:23.096 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:23.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:23.096 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:23.096 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:23.096 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:23.096 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:23.096 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:23.096 associated memzone info: size: 1.000366 MiB name: RG_ring_0_118847 00:06:23.096 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:23.096 associated memzone info: size: 1.000366 MiB name: RG_ring_1_118847 00:06:23.096 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:23.096 associated memzone info: size: 1.000366 MiB name: RG_ring_4_118847 00:06:23.096 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:06:23.096 associated memzone info: size: 1.000366 MiB name: RG_ring_5_118847 00:06:23.096 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:23.096 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_118847 00:06:23.096 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:23.096 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_118847 00:06:23.096 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:23.096 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:23.096 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:23.096 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:23.096 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:23.096 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:23.096 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:23.096 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_118847 00:06:23.097 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:23.097 associated memzone info: size: 0.125366 MiB name: RG_ring_2_118847 00:06:23.097 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:23.097 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:23.097 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:23.097 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:23.097 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:23.097 associated memzone info: size: 0.015991 MiB name: RG_ring_3_118847 00:06:23.097 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:23.097 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:23.097 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:23.097 associated memzone info: size: 0.000183 MiB name: MP_msgpool_118847 00:06:23.097 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:23.097 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_118847 00:06:23.097 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:23.097 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_118847 00:06:23.097 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:23.097 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:23.097 19:00:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:23.097 19:00:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 118847 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 118847 ']' 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 118847 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118847 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118847' 00:06:23.097 killing process with pid 118847 00:06:23.097 19:00:57 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 118847 00:06:23.097 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 118847 00:06:23.357 00:06:23.357 real 0m1.025s 00:06:23.357 user 0m0.927s 00:06:23.357 sys 0m0.457s 00:06:23.357 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.357 19:00:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.357 ************************************ 00:06:23.357 END TEST dpdk_mem_utility 00:06:23.357 ************************************ 00:06:23.618 19:00:57 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:23.618 19:00:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.618 19:00:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.618 19:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:23.618 ************************************ 00:06:23.618 START TEST event 00:06:23.618 ************************************ 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:23.618 * Looking for test storage... 00:06:23.618 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.618 19:00:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.618 19:00:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.618 19:00:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.618 19:00:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.618 19:00:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.618 19:00:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.618 19:00:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.618 19:00:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.618 19:00:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.618 19:00:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.618 19:00:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.618 19:00:57 event -- scripts/common.sh@344 -- # case "$op" in 00:06:23.618 19:00:57 event -- scripts/common.sh@345 -- # : 1 00:06:23.618 19:00:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.618 19:00:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.618 19:00:57 event -- scripts/common.sh@365 -- # decimal 1 00:06:23.618 19:00:57 event -- scripts/common.sh@353 -- # local d=1 00:06:23.618 19:00:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.618 19:00:57 event -- scripts/common.sh@355 -- # echo 1 00:06:23.618 19:00:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.618 19:00:57 event -- scripts/common.sh@366 -- # decimal 2 00:06:23.618 19:00:57 event -- scripts/common.sh@353 -- # local d=2 00:06:23.618 19:00:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.618 19:00:57 event -- scripts/common.sh@355 -- # echo 2 00:06:23.618 19:00:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.618 19:00:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.618 19:00:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.618 19:00:57 event -- scripts/common.sh@368 -- # return 0 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.618 --rc genhtml_branch_coverage=1 00:06:23.618 --rc genhtml_function_coverage=1 00:06:23.618 --rc genhtml_legend=1 00:06:23.618 --rc geninfo_all_blocks=1 00:06:23.618 --rc geninfo_unexecuted_blocks=1 00:06:23.618 00:06:23.618 ' 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.618 --rc genhtml_branch_coverage=1 00:06:23.618 --rc genhtml_function_coverage=1 00:06:23.618 --rc genhtml_legend=1 00:06:23.618 --rc geninfo_all_blocks=1 00:06:23.618 --rc geninfo_unexecuted_blocks=1 00:06:23.618 00:06:23.618 ' 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.618 --rc genhtml_branch_coverage=1 00:06:23.618 --rc genhtml_function_coverage=1 00:06:23.618 --rc genhtml_legend=1 00:06:23.618 --rc geninfo_all_blocks=1 00:06:23.618 --rc geninfo_unexecuted_blocks=1 00:06:23.618 00:06:23.618 ' 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.618 --rc genhtml_branch_coverage=1 00:06:23.618 --rc genhtml_function_coverage=1 00:06:23.618 --rc genhtml_legend=1 00:06:23.618 --rc geninfo_all_blocks=1 00:06:23.618 --rc geninfo_unexecuted_blocks=1 00:06:23.618 00:06:23.618 ' 00:06:23.618 19:00:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:23.618 19:00:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:23.618 19:00:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:23.618 19:00:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.618 19:00:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.879 ************************************ 00:06:23.879 START TEST event_perf 00:06:23.879 ************************************ 00:06:23.879 19:00:58 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
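The event_perf binary traced above is SPDK's event-throughput microbenchmark: it floods the reactors with events for the requested duration and prints one per-lcore event count when it stops. Here -m 0xF spreads reactors across cores 0-3 and -t 1 runs for one second, as the run output that follows shows. A minimal sketch of invoking it by hand from this workspace's SPDK build (paths taken from the trace above):

    # hedged sketch: run the event throughput benchmark directly.
    # -m is the reactor core mask, -t the run time in seconds; on exit
    # it prints one "lcore N: <event count>" line per reactor.
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1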
00:06:23.879 Running I/O for 1 seconds...[2024-12-13 19:00:58.055326] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:23.879 [2024-12-13 19:00:58.055404] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118975 ] 00:06:23.879 [2024-12-13 19:00:58.150357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.879 [2024-12-13 19:00:58.176380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.879 [2024-12-13 19:00:58.176488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.879 [2024-12-13 19:00:58.176600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.879 [2024-12-13 19:00:58.176601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.261 Running I/O for 1 seconds... 00:06:25.261 lcore 0: 203078 00:06:25.261 lcore 1: 203078 00:06:25.261 lcore 2: 203078 00:06:25.261 lcore 3: 203078 00:06:25.261 done. 00:06:25.261 00:06:25.261 real 0m1.178s 00:06:25.261 user 0m4.078s 00:06:25.261 sys 0m0.097s 00:06:25.261 19:00:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.261 19:00:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.261 ************************************ 00:06:25.261 END TEST event_perf 00:06:25.261 ************************************ 00:06:25.261 19:00:59 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:25.261 19:00:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:25.261 19:00:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.261 19:00:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.261 ************************************ 00:06:25.261 START TEST event_reactor 00:06:25.261 ************************************ 00:06:25.262 19:00:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:25.262 [2024-12-13 19:00:59.312401] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:25.262 [2024-12-13 19:00:59.312484] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119241 ] 00:06:25.262 [2024-12-13 19:00:59.408302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.262 [2024-12-13 19:00:59.431963] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.201 test_start 00:06:26.201 oneshot 00:06:26.201 tick 100 00:06:26.201 tick 100 00:06:26.201 tick 250 00:06:26.201 tick 100 00:06:26.201 tick 100 00:06:26.201 tick 100 00:06:26.201 tick 250 00:06:26.202 tick 500 00:06:26.202 tick 100 00:06:26.202 tick 100 00:06:26.202 tick 250 00:06:26.202 tick 100 00:06:26.202 tick 100 00:06:26.202 test_end 00:06:26.202 00:06:26.202 real 0m1.178s 00:06:26.202 user 0m1.089s 00:06:26.202 sys 0m0.085s 00:06:26.202 19:01:00 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.202 19:01:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:26.202 ************************************ 00:06:26.202 END TEST event_reactor 00:06:26.202 ************************************ 00:06:26.202 19:01:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.202 19:01:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:26.202 19:01:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.202 19:01:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.202 ************************************ 00:06:26.202 START TEST event_reactor_perf 00:06:26.202 ************************************ 00:06:26.202 19:01:00 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.202 [2024-12-13 19:01:00.577258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:26.202 [2024-12-13 19:01:00.577347] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119523 ] 00:06:26.465 [2024-12-13 19:01:00.671545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.465 [2024-12-13 19:01:00.693758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.408 test_start 00:06:27.408 test_end 00:06:27.408 Performance: 521422 events per second 00:06:27.408 00:06:27.408 real 0m1.170s 00:06:27.408 user 0m1.080s 00:06:27.408 sys 0m0.085s 00:06:27.408 19:01:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.408 19:01:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.408 ************************************ 00:06:27.408 END TEST event_reactor_perf 00:06:27.408 ************************************ 00:06:27.408 19:01:01 event -- event/event.sh@49 -- # uname -s 00:06:27.408 19:01:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.408 19:01:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.408 19:01:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.408 19:01:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.408 19:01:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.668 ************************************ 00:06:27.668 START TEST event_scheduler 00:06:27.668 ************************************ 00:06:27.668 19:01:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.668 * Looking for test storage... 
00:06:27.668 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:27.668 19:01:01 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.668 19:01:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.668 19:01:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.668 19:01:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.668 19:01:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.668 19:01:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.668 19:01:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.668 19:01:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.668 19:01:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.668 --rc genhtml_branch_coverage=1 00:06:27.668 --rc genhtml_function_coverage=1 00:06:27.668 --rc genhtml_legend=1 00:06:27.668 --rc geninfo_all_blocks=1 00:06:27.668 --rc geninfo_unexecuted_blocks=1 00:06:27.668 00:06:27.668 ' 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.668 --rc genhtml_branch_coverage=1 00:06:27.668 --rc genhtml_function_coverage=1 00:06:27.668 --rc genhtml_legend=1 00:06:27.668 --rc geninfo_all_blocks=1 00:06:27.668 --rc geninfo_unexecuted_blocks=1 00:06:27.668 00:06:27.668 ' 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.668 --rc genhtml_branch_coverage=1 00:06:27.668 --rc genhtml_function_coverage=1 00:06:27.668 --rc genhtml_legend=1 00:06:27.668 --rc geninfo_all_blocks=1 00:06:27.668 --rc geninfo_unexecuted_blocks=1 00:06:27.668 00:06:27.668 ' 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.668 --rc genhtml_branch_coverage=1 00:06:27.668 --rc genhtml_function_coverage=1 00:06:27.668 --rc genhtml_legend=1 00:06:27.668 --rc geninfo_all_blocks=1 00:06:27.668 --rc geninfo_unexecuted_blocks=1 00:06:27.668 00:06:27.668 ' 00:06:27.668 19:01:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.668 19:01:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=119852 00:06:27.668 19:01:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.668 19:01:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.668 19:01:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 119852 
00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 119852 ']' 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.668 19:01:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.929 [2024-12-13 19:01:02.062266] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:27.929 [2024-12-13 19:01:02.062320] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119852 ] 00:06:27.929 [2024-12-13 19:01:02.151172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.929 [2024-12-13 19:01:02.177093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.929 [2024-12-13 19:01:02.177202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.929 [2024-12-13 19:01:02.177235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.929 [2024-12-13 19:01:02.177236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:27.929 19:01:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.929 [2024-12-13 19:01:02.205902] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:27.929 [2024-12-13 19:01:02.205921] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.929 [2024-12-13 19:01:02.205931] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:27.929 [2024-12-13 19:01:02.205939] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:27.929 [2024-12-13 19:01:02.205945] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.929 19:01:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.929 [2024-12-13 19:01:02.276428] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
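Before framework_start_init the test switches the still-initializing target to the dynamic scheduler; the notices above show the DPDK governor declining to initialize on this host's core layout and the dynamic scheduler falling back to its defaults of load limit 20, core limit 80 and core busy 95. A hedged sketch of the same RPC sequence issued by hand against a target started with --wait-for-rpc; the method names all appear in the rpc_get_methods listing earlier in this log, while the socket path is assumed to be the default:

    # hedged sketch: select and inspect the scheduler over RPC. The
    # target must still be waiting in --wait-for-rpc so that
    # framework_set_scheduler precedes framework_start_init.
    # /var/tmp/spdk.sock is the default socket path (an assumption).
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler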
00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.929 19:01:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.929 19:01:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 ************************************ 00:06:28.190 START TEST scheduler_create_thread 00:06:28.190 ************************************ 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 2 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 3 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 4 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 5 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 6 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 7 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 8 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 9 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 10 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.190 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.760 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.760 19:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:28.760 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.760 19:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.145 19:01:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.145 19:01:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:30.145 19:01:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:30.145 19:01:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.145 19:01:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.087 19:01:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.087 00:06:31.087 real 0m3.101s 00:06:31.087 user 0m0.023s 00:06:31.087 sys 0m0.009s 00:06:31.087 19:01:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.087 19:01:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.087 ************************************ 00:06:31.087 END TEST scheduler_create_thread 00:06:31.087 ************************************ 00:06:31.087 19:01:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:31.087 19:01:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 119852 00:06:31.087 19:01:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 119852 ']' 00:06:31.087 19:01:05 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 119852 00:06:31.087 19:01:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:31.347 19:01:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.347 19:01:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119852 00:06:31.347 19:01:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:31.347 19:01:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:31.347 19:01:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119852' 00:06:31.347 killing process with pid 119852 00:06:31.347 19:01:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 119852 00:06:31.347 19:01:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 119852 00:06:31.608 [2024-12-13 19:01:05.795485] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
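The scheduler_create_thread subtest above drives thread management through rpc.py's plugin mechanism rather than core RPCs: the scheduler_plugin module bundled with the test app adds scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete. A hedged sketch of the calls seen in the trace, reusing the thread ids (11 and 12) this particular run returned; rpc.py must be able to import scheduler_plugin, which is an assumption about PYTHONPATH pointing at the test's scheduler directory:

    # hedged sketch of the plugin RPCs driven by the subtest: create an
    # active thread pinned to core 0, drop thread 11 to 50% activity,
    # then delete thread 12. Ids 11/12 are the values this run returned.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12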
00:06:31.608 00:06:31.608 real 0m4.157s 00:06:31.608 user 0m6.563s 00:06:31.608 sys 0m0.440s 00:06:31.608 19:01:05 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.608 19:01:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.608 ************************************ 00:06:31.608 END TEST event_scheduler 00:06:31.608 ************************************ 00:06:31.869 19:01:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:31.869 19:01:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:31.869 19:01:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.869 19:01:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.869 19:01:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.869 ************************************ 00:06:31.869 START TEST app_repeat 00:06:31.869 ************************************ 00:06:31.869 19:01:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=120647 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 120647' 00:06:31.869 Process app_repeat pid: 120647 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:31.869 spdk_app_start Round 0 00:06:31.869 19:01:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 120647 /var/tmp/spdk-nbd.sock 00:06:31.869 19:01:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 120647 ']' 00:06:31.869 19:01:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.869 19:01:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.869 19:01:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.869 19:01:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.869 19:01:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.869 [2024-12-13 19:01:06.107558] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:31.869 [2024-12-13 19:01:06.107619] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120647 ] 00:06:31.869 [2024-12-13 19:01:06.201376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.869 [2024-12-13 19:01:06.225428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.869 [2024-12-13 19:01:06.225429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.130 19:01:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.130 19:01:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:32.130 19:01:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.130 Malloc0 00:06:32.392 19:01:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.392 Malloc1 00:06:32.392 19:01:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.392 19:01:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.653 /dev/nbd0 00:06:32.653 19:01:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.653 19:01:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.653 1+0 records in 00:06:32.653 1+0 records out 00:06:32.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231676 s, 17.7 MB/s 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.653 19:01:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.653 19:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.653 19:01:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.653 19:01:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.914 /dev/nbd1 00:06:32.914 19:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.914 19:01:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.914 1+0 records in 00:06:32.914 1+0 records out 00:06:32.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228087 s, 18.0 MB/s 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.914 19:01:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.914 19:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.914 19:01:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.914 19:01:07 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.914 19:01:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.914 19:01:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.175 { 00:06:33.175 "nbd_device": "/dev/nbd0", 00:06:33.175 "bdev_name": "Malloc0" 00:06:33.175 }, 00:06:33.175 { 00:06:33.175 "nbd_device": "/dev/nbd1", 00:06:33.175 "bdev_name": "Malloc1" 00:06:33.175 } 00:06:33.175 ]' 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.175 { 00:06:33.175 "nbd_device": "/dev/nbd0", 00:06:33.175 "bdev_name": "Malloc0" 00:06:33.175 }, 00:06:33.175 { 00:06:33.175 "nbd_device": "/dev/nbd1", 00:06:33.175 "bdev_name": "Malloc1" 00:06:33.175 } 00:06:33.175 ]' 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.175 /dev/nbd1' 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.175 /dev/nbd1' 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.175 256+0 records in 00:06:33.175 256+0 records out 00:06:33.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010988 s, 95.4 MB/s 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.175 256+0 records in 00:06:33.175 256+0 records out 00:06:33.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194571 s, 53.9 MB/s 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.175 19:01:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.436 256+0 records in 00:06:33.436 256+0 records out 00:06:33.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205126 s, 51.1 MB/s 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.436 19:01:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.697 19:01:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.958 19:01:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.958 19:01:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.219 19:01:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.480 [2024-12-13 19:01:08.623284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.480 [2024-12-13 19:01:08.642950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.480 [2024-12-13 19:01:08.642951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.480 [2024-12-13 19:01:08.683609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.480 [2024-12-13 19:01:08.683650] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.781 19:01:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.781 19:01:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:37.781 spdk_app_start Round 1 00:06:37.781 19:01:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 120647 /var/tmp/spdk-nbd.sock 00:06:37.781 19:01:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 120647 ']' 00:06:37.781 19:01:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.781 19:01:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.781 19:01:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
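Each app_repeat round above performs the same write-and-verify cycle against two malloc bdevs exported over NBD. A condensed sketch of that cycle, assuming an app listening on /var/tmp/spdk-nbd.sock and the nbd kernel module loaded; the commands mirror the trace, the temp-file path is illustrative:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $RPC bdev_malloc_create 64 4096          # 64 MiB bdev with 4096-byte blocks -> Malloc0
  $RPC bdev_malloc_create 64 4096          # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest $nbd   # byte-for-byte read-back check
  done
  rm /tmp/nbdrandtest

  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC nbd_get_disks                       # '[]' once both disks are stopped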
00:06:37.781 19:01:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.781 19:01:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.781 19:01:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.781 19:01:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:37.781 19:01:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.781 Malloc0 00:06:37.781 19:01:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.781 Malloc1 00:06:37.781 19:01:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.781 19:01:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.042 /dev/nbd0 00:06:38.042 19:01:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.042 19:01:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:38.042 1+0 records in 00:06:38.042 1+0 records out 00:06:38.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253436 s, 16.2 MB/s 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.042 19:01:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:38.042 19:01:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.042 19:01:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.042 19:01:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.303 /dev/nbd1 00:06:38.303 19:01:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.303 19:01:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.303 1+0 records in 00:06:38.303 1+0 records out 00:06:38.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261632 s, 15.7 MB/s 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.303 19:01:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:38.303 19:01:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.303 19:01:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.303 19:01:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.303 19:01:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.303 19:01:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.564 { 00:06:38.564 
"nbd_device": "/dev/nbd0", 00:06:38.564 "bdev_name": "Malloc0" 00:06:38.564 }, 00:06:38.564 { 00:06:38.564 "nbd_device": "/dev/nbd1", 00:06:38.564 "bdev_name": "Malloc1" 00:06:38.564 } 00:06:38.564 ]' 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.564 { 00:06:38.564 "nbd_device": "/dev/nbd0", 00:06:38.564 "bdev_name": "Malloc0" 00:06:38.564 }, 00:06:38.564 { 00:06:38.564 "nbd_device": "/dev/nbd1", 00:06:38.564 "bdev_name": "Malloc1" 00:06:38.564 } 00:06:38.564 ]' 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.564 /dev/nbd1' 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.564 /dev/nbd1' 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.564 256+0 records in 00:06:38.564 256+0 records out 00:06:38.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106682 s, 98.3 MB/s 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.564 256+0 records in 00:06:38.564 256+0 records out 00:06:38.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193705 s, 54.1 MB/s 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.564 19:01:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.564 256+0 records in 00:06:38.564 256+0 records out 00:06:38.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205418 s, 51.0 MB/s 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.825 19:01:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.825 19:01:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.086 19:01:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.347 19:01:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.347 19:01:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.607 19:01:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.607 [2024-12-13 19:01:13.982259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.867 [2024-12-13 19:01:14.001849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.867 [2024-12-13 19:01:14.001850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.867 [2024-12-13 19:01:14.043691] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.867 [2024-12-13 19:01:14.043730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.164 19:01:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.164 19:01:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:43.164 spdk_app_start Round 2 00:06:43.164 19:01:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 120647 /var/tmp/spdk-nbd.sock 00:06:43.164 19:01:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 120647 ']' 00:06:43.164 19:01:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.164 19:01:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.164 19:01:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:43.164 19:01:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.164 19:01:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.164 19:01:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.164 19:01:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:43.164 19:01:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.164 Malloc0 00:06:43.164 19:01:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.164 Malloc1 00:06:43.164 19:01:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.164 19:01:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.425 /dev/nbd0 00:06:43.425 19:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.425 19:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:43.425 1+0 records in 00:06:43.425 1+0 records out 00:06:43.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274111 s, 14.9 MB/s 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.425 19:01:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:43.425 19:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.425 19:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.425 19:01:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.685 /dev/nbd1 00:06:43.685 19:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.685 19:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.685 1+0 records in 00:06:43.685 1+0 records out 00:06:43.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021865 s, 18.7 MB/s 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.685 19:01:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:43.685 19:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.685 19:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.685 19:01:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.685 19:01:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.685 19:01:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.946 { 00:06:43.946 
"nbd_device": "/dev/nbd0", 00:06:43.946 "bdev_name": "Malloc0" 00:06:43.946 }, 00:06:43.946 { 00:06:43.946 "nbd_device": "/dev/nbd1", 00:06:43.946 "bdev_name": "Malloc1" 00:06:43.946 } 00:06:43.946 ]' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.946 { 00:06:43.946 "nbd_device": "/dev/nbd0", 00:06:43.946 "bdev_name": "Malloc0" 00:06:43.946 }, 00:06:43.946 { 00:06:43.946 "nbd_device": "/dev/nbd1", 00:06:43.946 "bdev_name": "Malloc1" 00:06:43.946 } 00:06:43.946 ]' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.946 /dev/nbd1' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.946 /dev/nbd1' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.946 256+0 records in 00:06:43.946 256+0 records out 00:06:43.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115333 s, 90.9 MB/s 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.946 256+0 records in 00:06:43.946 256+0 records out 00:06:43.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192717 s, 54.4 MB/s 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.946 256+0 records in 00:06:43.946 256+0 records out 00:06:43.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203285 s, 51.6 MB/s 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.946 19:01:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.207 19:01:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.468 19:01:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.728 19:01:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.728 19:01:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:44.989 19:01:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.989 [2024-12-13 19:01:19.330172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.989 [2024-12-13 19:01:19.349500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.989 [2024-12-13 19:01:19.349500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.249 [2024-12-13 19:01:19.390332] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.250 [2024-12-13 19:01:19.390371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.547 19:01:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 120647 /var/tmp/spdk-nbd.sock 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 120647 ']' 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:48.547 19:01:22 event.app_repeat -- event/event.sh@39 -- # killprocess 120647 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 120647 ']' 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 120647 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120647 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120647' 00:06:48.547 killing process with pid 120647 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 120647 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 120647 00:06:48.547 spdk_app_start is called in Round 0. 00:06:48.547 Shutdown signal received, stop current app iteration 00:06:48.547 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:48.547 spdk_app_start is called in Round 1. 00:06:48.547 Shutdown signal received, stop current app iteration 00:06:48.547 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:48.547 spdk_app_start is called in Round 2. 00:06:48.547 Shutdown signal received, stop current app iteration 00:06:48.547 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:48.547 spdk_app_start is called in Round 3. 00:06:48.547 Shutdown signal received, stop current app iteration 00:06:48.547 19:01:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:48.547 19:01:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:48.547 00:06:48.547 real 0m16.521s 00:06:48.547 user 0m35.962s 00:06:48.547 sys 0m3.059s 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.547 19:01:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.547 ************************************ 00:06:48.547 END TEST app_repeat 00:06:48.547 ************************************ 00:06:48.547 19:01:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:48.547 19:01:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:48.547 19:01:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.547 19:01:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.547 19:01:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.547 ************************************ 00:06:48.547 START TEST cpu_locks 00:06:48.547 ************************************ 00:06:48.547 19:01:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:48.547 * Looking for test storage... 
00:06:48.547 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:48.547 19:01:22 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.547 19:01:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.547 19:01:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.547 19:01:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.547 19:01:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.548 19:01:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:48.548 19:01:22 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.548 19:01:22 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.548 --rc genhtml_branch_coverage=1 00:06:48.548 --rc genhtml_function_coverage=1 00:06:48.548 --rc genhtml_legend=1 00:06:48.548 --rc geninfo_all_blocks=1 00:06:48.548 --rc geninfo_unexecuted_blocks=1 00:06:48.548 00:06:48.548 ' 00:06:48.548 19:01:22 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.548 --rc genhtml_branch_coverage=1 00:06:48.548 --rc 
genhtml_function_coverage=1 00:06:48.548 --rc genhtml_legend=1 00:06:48.548 --rc geninfo_all_blocks=1 00:06:48.548 --rc geninfo_unexecuted_blocks=1 00:06:48.548 00:06:48.548 ' 00:06:48.548 19:01:22 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.548 --rc genhtml_branch_coverage=1 00:06:48.548 --rc genhtml_function_coverage=1 00:06:48.548 --rc genhtml_legend=1 00:06:48.548 --rc geninfo_all_blocks=1 00:06:48.548 --rc geninfo_unexecuted_blocks=1 00:06:48.548 00:06:48.548 ' 00:06:48.548 19:01:22 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.548 --rc genhtml_branch_coverage=1 00:06:48.548 --rc genhtml_function_coverage=1 00:06:48.548 --rc genhtml_legend=1 00:06:48.548 --rc geninfo_all_blocks=1 00:06:48.548 --rc geninfo_unexecuted_blocks=1 00:06:48.548 00:06:48.548 ' 00:06:48.548 19:01:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:48.548 19:01:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:48.548 19:01:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:48.548 19:01:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:48.548 19:01:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.548 19:01:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.548 19:01:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.548 ************************************ 00:06:48.548 START TEST default_locks 00:06:48.548 ************************************ 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=123815 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 123815 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 123815 ']' 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.548 19:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.809 [2024-12-13 19:01:22.961215] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
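The scripts/common.sh block above ('lt 1.15 2' via cmp_versions) is a pure-shell dotted-version compare used to pick lcov options. The same predicate can be sketched with sort -V instead of the field-by-field loop (a simplification, not the script's exact algorithm):

    version_lt() {                                   # true when $1 is strictly older than $2
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo 'lcov is pre-2.x: use the legacy --rc options'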
00:06:48.809 [2024-12-13 19:01:22.961259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123815 ] 00:06:48.809 [2024-12-13 19:01:23.048696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.809 [2024-12-13 19:01:23.070121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.070 19:01:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.070 19:01:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:49.071 19:01:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 123815 00:06:49.071 19:01:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 123815 00:06:49.071 19:01:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.012 lslocks: write error 00:06:50.012 19:01:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 123815 00:06:50.012 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 123815 ']' 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 123815 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123815 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123815' 00:06:50.013 killing process with pid 123815 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 123815 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 123815 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 123815 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 123815 00:06:50.013 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 123815 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 123815 ']' 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 
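locks_exist above confirms that the freshly started target really holds its per-core lock: lslocks lists the pid's file locks and grep -q looks for the spdk_cpu_lock path. The stray 'lslocks: write error' that follows is benign: grep -q exits on its first match, closing the pipe under lslocks mid-write. As a standalone check:

    locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock      # lock files live at /var/tmp/spdk_cpu_lock_*
    }
    locks_exist 123815 && echo 'core lock is held'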
00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.273 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (123815) - No such process 00:06:50.273 ERROR: process (pid: 123815) is no longer running 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:50.273 00:06:50.273 real 0m1.486s 00:06:50.273 user 0m1.459s 00:06:50.273 sys 0m0.713s 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.273 19:01:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.273 ************************************ 00:06:50.273 END TEST default_locks 00:06:50.273 ************************************ 00:06:50.273 19:01:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:50.273 19:01:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.273 19:01:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.273 19:01:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.273 ************************************ 00:06:50.273 START TEST default_locks_via_rpc 00:06:50.273 ************************************ 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=124133 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 124133 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 124133 ']' 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.273 19:01:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.273 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.273 [2024-12-13 19:01:24.536896] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:50.273 [2024-12-13 19:01:24.536948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124133 ] 00:06:50.273 [2024-12-13 19:01:24.628407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.534 [2024-12-13 19:01:24.650722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 124133 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 124133 00:06:50.534 19:01:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 124133 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 124133 ']' 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 124133 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
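default_locks_via_rpc, wrapped up above, exercises the runtime toggle instead of command-line flags: framework_disable_cpumask_locks should leave no /var/tmp/spdk_cpu_lock_* files behind, and framework_enable_cpumask_locks should re-acquire them. A condensed sketch of that round trip (pid and socket taken from the log lines above):

    sock=/var/tmp/spdk.sock
    pid=124133                                                    # the running spdk_tgt
    scripts/rpc.py -s "$sock" framework_disable_cpumask_locks     # drop the per-core locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo 'unexpected: lock files remain'
    scripts/rpc.py -s "$sock" framework_enable_cpumask_locks      # take them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo 'locked again, as expected'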
00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124133 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124133' 00:06:51.106 killing process with pid 124133 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 124133 00:06:51.106 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 124133 00:06:51.366 00:06:51.366 real 0m1.123s 00:06:51.366 user 0m1.073s 00:06:51.366 sys 0m0.550s 00:06:51.366 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.366 19:01:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.366 ************************************ 00:06:51.366 END TEST default_locks_via_rpc 00:06:51.366 ************************************ 00:06:51.366 19:01:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:51.366 19:01:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.366 19:01:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.366 19:01:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.366 ************************************ 00:06:51.366 START TEST non_locking_app_on_locked_coremask 00:06:51.366 ************************************ 00:06:51.366 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:51.366 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=124257 00:06:51.366 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 124257 /var/tmp/spdk.sock 00:06:51.366 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.366 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 124257 ']' 00:06:51.367 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.367 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.367 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.367 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.367 19:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.627 [2024-12-13 19:01:25.746555] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:51.627 [2024-12-13 19:01:25.746605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124257 ] 00:06:51.627 [2024-12-13 19:01:25.840716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.627 [2024-12-13 19:01:25.862850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=124450 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 124450 /var/tmp/spdk2.sock 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 124450 ']' 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.888 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.888 [2024-12-13 19:01:26.110912] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:51.888 [2024-12-13 19:01:26.110964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124450 ] 00:06:51.888 [2024-12-13 19:01:26.221659] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
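The point of non_locking_app_on_locked_coremask is visible above: a second spdk_tgt on the same core mask comes up cleanly ('CPU core locks deactivated') only because it was started with --disable-cpumask-locks and its own RPC socket. Schematically (paths as in the log; the backgrounding is illustrative):

    build/bin/spdk_tgt -m 0x1 &                       # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                      # same core, but takes no lock
    pid2=$!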
00:06:51.888 [2024-12-13 19:01:26.221685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.149 [2024-12-13 19:01:26.268615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.720 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.720 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.720 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 124257 00:06:52.720 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 124257 00:06:52.720 19:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.103 lslocks: write error 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 124257 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 124257 ']' 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 124257 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124257 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124257' 00:06:54.103 killing process with pid 124257 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 124257 00:06:54.103 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 124257 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 124450 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 124450 ']' 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 124450 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124450 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124450' 00:06:54.675 killing 
process with pid 124450 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 124450 00:06:54.675 19:01:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 124450 00:06:54.936 00:06:54.936 real 0m3.428s 00:06:54.936 user 0m3.593s 00:06:54.936 sys 0m1.282s 00:06:54.936 19:01:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.936 19:01:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.936 ************************************ 00:06:54.936 END TEST non_locking_app_on_locked_coremask 00:06:54.936 ************************************ 00:06:54.936 19:01:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:54.936 19:01:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.936 19:01:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.936 19:01:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.936 ************************************ 00:06:54.936 START TEST locking_app_on_unlocked_coremask 00:06:54.936 ************************************ 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=125018 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 125018 /var/tmp/spdk.sock 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125018 ']' 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.936 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.936 [2024-12-13 19:01:29.261986] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:54.936 [2024-12-13 19:01:29.262035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125018 ] 00:06:55.197 [2024-12-13 19:01:29.352207] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
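locking_app_on_unlocked_coremask, starting above, flips the roles: the first instance runs with --disable-cpumask-locks, so the second, lock-enabled instance is the one that should end up owning the core-0 lock. One way to verify which pid holds it (a sketch; $pid2 stands for the second instance's pid, and the PID column position assumes the usual lslocks default output):

    owner=$(lslocks | awk '/spdk_cpu_lock_000/ {print $2; exit}')   # column 2 is PID
    [ "$owner" = "$pid2" ] && echo 'lock held by the lock-enabled instance'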
00:06:55.197 [2024-12-13 19:01:29.352231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.197 [2024-12-13 19:01:29.374801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=125031 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 125031 /var/tmp/spdk2.sock 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125031 ']' 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.197 19:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.457 [2024-12-13 19:01:29.619397] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:55.458 [2024-12-13 19:01:29.619452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125031 ] 00:06:55.458 [2024-12-13 19:01:29.730301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.458 [2024-12-13 19:01:29.776955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.398 19:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.398 19:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:56.398 19:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 125031 00:06:56.398 19:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125031 00:06:56.398 19:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.339 lslocks: write error 00:06:57.339 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 125018 00:06:57.339 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125018 ']' 00:06:57.339 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 125018 00:06:57.339 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:57.339 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.339 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125018 00:06:57.600 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.600 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.600 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125018' 00:06:57.600 killing process with pid 125018 00:06:57.600 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 125018 00:06:57.600 19:01:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 125018 00:06:58.171 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 125031 00:06:58.171 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125031 ']' 00:06:58.171 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 125031 00:06:58.171 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:58.171 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.171 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125031 00:06:58.171 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.172 19:01:32 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.172 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125031' 00:06:58.172 killing process with pid 125031 00:06:58.172 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 125031 00:06:58.172 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 125031 00:06:58.433 00:06:58.433 real 0m3.460s 00:06:58.433 user 0m3.630s 00:06:58.433 sys 0m1.332s 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.433 ************************************ 00:06:58.433 END TEST locking_app_on_unlocked_coremask 00:06:58.433 ************************************ 00:06:58.433 19:01:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.433 19:01:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.433 19:01:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.433 19:01:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.433 ************************************ 00:06:58.433 START TEST locking_app_on_locked_coremask 00:06:58.433 ************************************ 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=125603 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 125603 /var/tmp/spdk.sock 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125603 ']' 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.433 19:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.433 [2024-12-13 19:01:32.808291] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
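locking_app_on_locked_coremask, beginning above, leans on the NOT helper: a second instance on an already-locked core is expected to fail, so the test inverts the result and treats failure as success. The pattern, roughly (the real helper also tracks exit codes more carefully):

    NOT() {
      if "$@"; then
        return 1            # unexpected success
      fi
      return 0              # failed, as the test intends
    }
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock && echo 'second instance was rejected, as intended'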
00:06:58.433 [2024-12-13 19:01:32.808340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125603 ] 00:06:58.693 [2024-12-13 19:01:32.900610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.693 [2024-12-13 19:01:32.922780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=125610 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 125610 /var/tmp/spdk2.sock 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 125610 /var/tmp/spdk2.sock 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 125610 /var/tmp/spdk2.sock 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125610 ']' 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.954 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.954 [2024-12-13 19:01:33.168073] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:58.954 [2024-12-13 19:01:33.168123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125610 ] 00:06:58.954 [2024-12-13 19:01:33.274838] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 125603 has claimed it. 00:06:58.954 [2024-12-13 19:01:33.274871] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.524 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (125610) - No such process 00:06:59.524 ERROR: process (pid: 125610) is no longer running 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 125603 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125603 00:06:59.524 19:01:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.785 lslocks: write error 00:06:59.785 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 125603 00:06:59.785 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125603 ']' 00:06:59.785 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 125603 00:06:59.785 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.785 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.785 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125603 00:07:00.046 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.046 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.046 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125603' 00:07:00.046 killing process with pid 125603 00:07:00.046 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 125603 00:07:00.046 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 125603 00:07:00.308 00:07:00.308 real 0m1.716s 00:07:00.308 user 0m1.825s 00:07:00.308 sys 0m0.639s 00:07:00.308 19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.308 
19:01:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.308 ************************************ 00:07:00.308 END TEST locking_app_on_locked_coremask 00:07:00.308 ************************************ 00:07:00.308 19:01:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:00.308 19:01:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.308 19:01:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.308 19:01:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.308 ************************************ 00:07:00.308 START TEST locking_overlapped_coremask 00:07:00.308 ************************************ 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=125903 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 125903 /var/tmp/spdk.sock 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 125903 ']' 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.308 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.308 [2024-12-13 19:01:34.610980] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:00.308 [2024-12-13 19:01:34.611028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125903 ] 00:07:00.568 [2024-12-13 19:01:34.702084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.568 [2024-12-13 19:01:34.727149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.568 [2024-12-13 19:01:34.727256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.568 [2024-12-13 19:01:34.727257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=125990 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 125990 /var/tmp/spdk2.sock 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 125990 /var/tmp/spdk2.sock 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 125990 /var/tmp/spdk2.sock 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 125990 ']' 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.568 19:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.828 [2024-12-13 19:01:34.973368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
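The overlapped-coremask case above reduces to bit arithmetic: -m 0x7 pins cores 0-2 and -m 0x1c pins cores 2-4, so the two masks collide exactly on core 2, which is where the second instance will be refused. In shell:

    m1=0x7; m2=0x1c
    printf 'overlap mask: 0x%x\n' $(( m1 & m2 ))      # 0x4 -> bit 2 -> core 2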
00:07:00.828 [2024-12-13 19:01:34.973421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125990 ] 00:07:00.828 [2024-12-13 19:01:35.084189] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125903 has claimed it. 00:07:00.828 [2024-12-13 19:01:35.084222] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:01.397 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (125990) - No such process 00:07:01.397 ERROR: process (pid: 125990) is no longer running 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 125903 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 125903 ']' 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 125903 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125903 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.397 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.398 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125903' 00:07:01.398 killing process with pid 125903 00:07:01.398 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 125903 00:07:01.398 19:01:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 125903 00:07:01.658 00:07:01.658 real 0m1.420s 00:07:01.658 user 0m3.915s 00:07:01.658 sys 0m0.430s 00:07:01.658 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.658 19:01:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.658 ************************************ 00:07:01.658 END TEST locking_overlapped_coremask 00:07:01.658 ************************************ 00:07:01.658 19:01:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:01.658 19:01:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.658 19:01:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.658 19:01:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.918 ************************************ 00:07:01.918 START TEST locking_overlapped_coremask_via_rpc 00:07:01.918 ************************************ 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=126220 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 126220 /var/tmp/spdk.sock 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126220 ']' 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.918 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.918 [2024-12-13 19:01:36.100456] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:01.919 [2024-12-13 19:01:36.100499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126220 ] 00:07:01.919 [2024-12-13 19:01:36.190203] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
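The _via_rpc variant starting above launches both overlapping instances with --disable-cpumask-locks, so startup succeeds on both masks and the contention is deferred until the locks are requested over RPC. The launch phase, schematically (the second instance follows just below):

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # cores 0-2, no locks yet
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4, no locks yet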
00:07:01.919 [2024-12-13 19:01:36.190230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.919 [2024-12-13 19:01:36.213885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.919 [2024-12-13 19:01:36.213995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.919 [2024-12-13 19:01:36.213997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=126270 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 126270 /var/tmp/spdk2.sock 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126270 ']' 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.180 19:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.180 [2024-12-13 19:01:36.475518] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:02.180 [2024-12-13 19:01:36.475573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126270 ] 00:07:02.440 [2024-12-13 19:01:36.588381] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
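
Both targets in this test start with --disable-cpumask-locks, which is why two sets of reactors can come up on overlapping cores (the first target holds mask 0x7, the second 0x1c, both covering core 2) without either failing at launch. The per-core locks are plain files under /var/tmp, and the check_remaining_locks helper traced earlier simply compares the files present against the set expected for the mask. A minimal sketch of that comparison, with paths taken from the trace (the helper itself lives in event/cpu_locks.sh):

    # sketch of check_remaining_locks as seen in the xtrace above
    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files currently on disk
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]          # any extra or missing file fails the test
    }
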
00:07:02.440 [2024-12-13 19:01:36.588412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.440 [2024-12-13 19:01:36.641337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.440 [2024-12-13 19:01:36.641430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.440 [2024-12-13 19:01:36.641431] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.011 [2024-12-13 19:01:37.324115] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 126220 has claimed it. 
00:07:03.011 request: 00:07:03.011 { 00:07:03.011 "method": "framework_enable_cpumask_locks", 00:07:03.011 "req_id": 1 00:07:03.011 } 00:07:03.011 Got JSON-RPC error response 00:07:03.011 response: 00:07:03.011 { 00:07:03.011 "code": -32603, 00:07:03.011 "message": "Failed to claim CPU core: 2" 00:07:03.011 } 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 126220 /var/tmp/spdk.sock 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126220 ']' 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.011 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 126270 /var/tmp/spdk2.sock 00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126270 ']' 00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
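
The -32603 response above is the point of the test: once framework_enable_cpumask_locks succeeds on the first target (mask 0x7, socket /var/tmp/spdk.sock), asking the second target to enable its own locks must fail on the shared core 2. Reproduced by hand, the two RPC calls would look like this (a sketch; the rpc.py path is shortened, sockets and method names are copied from the trace):

    # enable locks on the first target - succeeds, claims cores 0-2
    ./scripts/rpc.py framework_enable_cpumask_locks
    # enable locks on the second target (mask 0x1c) - core 2 is already claimed
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"
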
00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.272 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.533 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.533 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.533 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:03.533 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.533 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.533 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.533 00:07:03.533 real 0m1.669s 00:07:03.533 user 0m0.759s 00:07:03.533 sys 0m0.187s 00:07:03.533 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.533 19:01:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.533 ************************************ 00:07:03.533 END TEST locking_overlapped_coremask_via_rpc 00:07:03.533 ************************************ 00:07:03.533 19:01:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:03.533 19:01:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126220 ]] 00:07:03.533 19:01:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126220 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126220 ']' 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126220 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126220 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126220' 00:07:03.533 killing process with pid 126220 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 126220 00:07:03.533 19:01:37 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 126220 00:07:03.794 19:01:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126270 ]] 00:07:03.794 19:01:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126270 00:07:03.794 19:01:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126270 ']' 00:07:03.794 19:01:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126270 00:07:03.794 19:01:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:03.794 19:01:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:03.794 19:01:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126270 00:07:04.054 19:01:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:04.054 19:01:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:04.054 19:01:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126270' 00:07:04.054 killing process with pid 126270 00:07:04.054 19:01:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 126270 00:07:04.054 19:01:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 126270 00:07:04.315 19:01:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.315 19:01:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:04.315 19:01:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126220 ]] 00:07:04.315 19:01:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126220 00:07:04.315 19:01:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126220 ']' 00:07:04.315 19:01:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126220 00:07:04.315 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (126220) - No such process 00:07:04.315 19:01:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 126220 is not found' 00:07:04.315 Process with pid 126220 is not found 00:07:04.315 19:01:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126270 ]] 00:07:04.315 19:01:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126270 00:07:04.315 19:01:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126270 ']' 00:07:04.315 19:01:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126270 00:07:04.315 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (126270) - No such process 00:07:04.315 19:01:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 126270 is not found' 00:07:04.315 Process with pid 126270 is not found 00:07:04.315 19:01:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.315 00:07:04.315 real 0m15.848s 00:07:04.315 user 0m26.068s 00:07:04.315 sys 0m6.253s 00:07:04.315 19:01:38 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.315 19:01:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.315 ************************************ 00:07:04.315 END TEST cpu_locks 00:07:04.315 ************************************ 00:07:04.315 00:07:04.315 real 0m40.767s 00:07:04.315 user 1m15.128s 00:07:04.315 sys 0m10.502s 00:07:04.315 19:01:38 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.315 19:01:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.315 ************************************ 00:07:04.315 END TEST event 00:07:04.315 ************************************ 00:07:04.315 19:01:38 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:04.315 19:01:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.315 19:01:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.315 19:01:38 -- common/autotest_common.sh@10 -- # set +x 00:07:04.315 ************************************ 00:07:04.315 START TEST thread 00:07:04.315 ************************************ 00:07:04.315 19:01:38 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:04.575 * Looking for test storage... 00:07:04.575 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:04.575 19:01:38 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.575 19:01:38 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.575 19:01:38 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.575 19:01:38 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.575 19:01:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.575 19:01:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.575 19:01:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.575 19:01:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.575 19:01:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.575 19:01:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.575 19:01:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.575 19:01:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.575 19:01:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.575 19:01:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.575 19:01:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.575 19:01:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:04.575 19:01:38 thread -- scripts/common.sh@345 -- # : 1 00:07:04.575 19:01:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.575 19:01:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.575 19:01:38 thread -- scripts/common.sh@365 -- # decimal 1 00:07:04.575 19:01:38 thread -- scripts/common.sh@353 -- # local d=1 00:07:04.575 19:01:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.575 19:01:38 thread -- scripts/common.sh@355 -- # echo 1 00:07:04.575 19:01:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.575 19:01:38 thread -- scripts/common.sh@366 -- # decimal 2 00:07:04.575 19:01:38 thread -- scripts/common.sh@353 -- # local d=2 00:07:04.575 19:01:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.575 19:01:38 thread -- scripts/common.sh@355 -- # echo 2 00:07:04.575 19:01:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.575 19:01:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.575 19:01:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.575 19:01:38 thread -- scripts/common.sh@368 -- # return 0 00:07:04.575 19:01:38 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.575 19:01:38 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.575 --rc genhtml_branch_coverage=1 00:07:04.575 --rc genhtml_function_coverage=1 00:07:04.575 --rc genhtml_legend=1 00:07:04.575 --rc geninfo_all_blocks=1 00:07:04.575 --rc geninfo_unexecuted_blocks=1 00:07:04.575 00:07:04.575 ' 00:07:04.575 19:01:38 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.575 --rc genhtml_branch_coverage=1 00:07:04.575 --rc genhtml_function_coverage=1 00:07:04.575 --rc genhtml_legend=1 00:07:04.576 --rc geninfo_all_blocks=1 00:07:04.576 --rc geninfo_unexecuted_blocks=1 00:07:04.576 00:07:04.576 ' 00:07:04.576 19:01:38 thread -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.576 --rc genhtml_branch_coverage=1 00:07:04.576 --rc genhtml_function_coverage=1 00:07:04.576 --rc genhtml_legend=1 00:07:04.576 --rc geninfo_all_blocks=1 00:07:04.576 --rc geninfo_unexecuted_blocks=1 00:07:04.576 00:07:04.576 ' 00:07:04.576 19:01:38 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.576 --rc genhtml_branch_coverage=1 00:07:04.576 --rc genhtml_function_coverage=1 00:07:04.576 --rc genhtml_legend=1 00:07:04.576 --rc geninfo_all_blocks=1 00:07:04.576 --rc geninfo_unexecuted_blocks=1 00:07:04.576 00:07:04.576 ' 00:07:04.576 19:01:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.576 19:01:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:04.576 19:01:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.576 19:01:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.576 ************************************ 00:07:04.576 START TEST thread_poller_perf 00:07:04.576 ************************************ 00:07:04.576 19:01:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.576 [2024-12-13 19:01:38.908913] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:04.576 [2024-12-13 19:01:38.908977] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126861 ] 00:07:04.836 [2024-12-13 19:01:39.000639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.836 [2024-12-13 19:01:39.022323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.836 Running 1000 pollers for 1 seconds with 1 microseconds period. 
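
The summary that follows reports raw cycle counts; poller_cost is simply busy cycles divided by the run count, converted to nanoseconds via the reported TSC rate. Checking the first run's numbers by hand (values taken from the report below):

    # poller_cost derivation for the 1 us period run (values from the report below)
    busy=2507341518; total_run_count=430000; tsc_hz=2500000000
    cyc=$(( busy / total_run_count ))        # 2507341518 / 430000 = 5831 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 5831 cyc at 2.5 GHz = 2332 ns
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic reproduces the 0-microsecond run further down: 2502066526 / 5216000 = 479 cycles, i.e. 191 ns.
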
00:07:05.777 [2024-12-13T18:01:40.155Z] ====================================== 00:07:05.777 [2024-12-13T18:01:40.155Z] busy:2507341518 (cyc) 00:07:05.777 [2024-12-13T18:01:40.155Z] total_run_count: 430000 00:07:05.777 [2024-12-13T18:01:40.155Z] tsc_hz: 2500000000 (cyc) 00:07:05.777 [2024-12-13T18:01:40.155Z] ====================================== 00:07:05.777 [2024-12-13T18:01:40.155Z] poller_cost: 5831 (cyc), 2332 (nsec) 00:07:05.777 00:07:05.777 real 0m1.174s 00:07:05.777 user 0m1.079s 00:07:05.777 sys 0m0.091s 00:07:05.777 19:01:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.777 19:01:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.777 ************************************ 00:07:05.777 END TEST thread_poller_perf 00:07:05.777 ************************************ 00:07:05.777 19:01:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:05.777 19:01:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:05.777 19:01:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.777 19:01:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.777 ************************************ 00:07:05.777 START TEST thread_poller_perf 00:07:05.777 ************************************ 00:07:05.777 19:01:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:06.038 [2024-12-13 19:01:40.169196] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:06.038 [2024-12-13 19:01:40.169278] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127144 ] 00:07:06.038 [2024-12-13 19:01:40.264680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.038 [2024-12-13 19:01:40.286977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.038 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:06.978 [2024-12-13T18:01:41.356Z] ====================================== 00:07:06.978 [2024-12-13T18:01:41.356Z] busy:2502066526 (cyc) 00:07:06.978 [2024-12-13T18:01:41.356Z] total_run_count: 5216000 00:07:06.978 [2024-12-13T18:01:41.356Z] tsc_hz: 2500000000 (cyc) 00:07:06.978 [2024-12-13T18:01:41.356Z] ====================================== 00:07:06.978 [2024-12-13T18:01:41.356Z] poller_cost: 479 (cyc), 191 (nsec) 00:07:06.978 00:07:06.978 real 0m1.176s 00:07:06.978 user 0m1.080s 00:07:06.978 sys 0m0.092s 00:07:06.978 19:01:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.978 19:01:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.978 ************************************ 00:07:06.978 END TEST thread_poller_perf 00:07:06.978 ************************************ 00:07:07.239 19:01:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:07.239 00:07:07.239 real 0m2.715s 00:07:07.239 user 0m2.324s 00:07:07.239 sys 0m0.413s 00:07:07.239 19:01:41 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.239 19:01:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.239 ************************************ 00:07:07.239 END TEST thread 00:07:07.239 ************************************ 00:07:07.239 19:01:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:07.239 19:01:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:07.239 19:01:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.239 19:01:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.239 19:01:41 -- common/autotest_common.sh@10 -- # set +x 00:07:07.239 ************************************ 00:07:07.239 START TEST app_cmdline 00:07:07.239 ************************************ 00:07:07.239 19:01:41 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:07.239 * Looking for test storage... 
00:07:07.239 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:07.239 19:01:41 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.239 19:01:41 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.239 19:01:41 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.500 19:01:41 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.500 19:01:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:07.500 19:01:41 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.500 19:01:41 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.500 --rc genhtml_branch_coverage=1 00:07:07.500 --rc genhtml_function_coverage=1 00:07:07.500 --rc genhtml_legend=1 00:07:07.500 --rc geninfo_all_blocks=1 00:07:07.500 --rc geninfo_unexecuted_blocks=1 00:07:07.500 00:07:07.500 ' 00:07:07.500 19:01:41 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.500 --rc genhtml_branch_coverage=1 00:07:07.500 --rc genhtml_function_coverage=1 00:07:07.500 --rc genhtml_legend=1 00:07:07.500 --rc geninfo_all_blocks=1 00:07:07.500 --rc geninfo_unexecuted_blocks=1 
00:07:07.500 00:07:07.500 ' 00:07:07.500 19:01:41 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.500 --rc genhtml_branch_coverage=1 00:07:07.500 --rc genhtml_function_coverage=1 00:07:07.500 --rc genhtml_legend=1 00:07:07.500 --rc geninfo_all_blocks=1 00:07:07.500 --rc geninfo_unexecuted_blocks=1 00:07:07.500 00:07:07.501 ' 00:07:07.501 19:01:41 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.501 --rc genhtml_branch_coverage=1 00:07:07.501 --rc genhtml_function_coverage=1 00:07:07.501 --rc genhtml_legend=1 00:07:07.501 --rc geninfo_all_blocks=1 00:07:07.501 --rc geninfo_unexecuted_blocks=1 00:07:07.501 00:07:07.501 ' 00:07:07.501 19:01:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:07.501 19:01:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=127476 00:07:07.501 19:01:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:07.501 19:01:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 127476 00:07:07.501 19:01:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 127476 ']' 00:07:07.501 19:01:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.501 19:01:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.501 19:01:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.501 19:01:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.501 19:01:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.501 [2024-12-13 19:01:41.697148] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:07.501 [2024-12-13 19:01:41.697204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127476 ] 00:07:07.501 [2024-12-13 19:01:41.772356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.501 [2024-12-13 19:01:41.794761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.761 19:01:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.761 19:01:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:07.761 19:01:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:08.022 { 00:07:08.022 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:07:08.022 "fields": { 00:07:08.022 "major": 25, 00:07:08.022 "minor": 1, 00:07:08.022 "patch": 0, 00:07:08.022 "suffix": "-pre", 00:07:08.022 "commit": "e01cb43b8" 00:07:08.022 } 00:07:08.022 } 00:07:08.022 19:01:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:08.022 19:01:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:08.022 19:01:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:08.022 19:01:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:08.022 19:01:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:08.023 19:01:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:08.023 19:01:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.023 19:01:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:08.023 19:01:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:08.023 19:01:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:08.023 19:01:42 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.284 request: 00:07:08.284 { 00:07:08.284 "method": "env_dpdk_get_mem_stats", 00:07:08.284 "req_id": 1 00:07:08.284 } 00:07:08.284 Got JSON-RPC error response 00:07:08.284 response: 00:07:08.284 { 00:07:08.284 "code": -32601, 00:07:08.284 "message": "Method not found" 00:07:08.284 } 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.284 19:01:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 127476 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 127476 ']' 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 127476 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127476 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127476' 00:07:08.284 killing process with pid 127476 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 127476 00:07:08.284 19:01:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 127476 00:07:08.545 00:07:08.545 real 0m1.338s 00:07:08.545 user 0m1.527s 00:07:08.545 sys 0m0.517s 00:07:08.545 19:01:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.545 19:01:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.545 ************************************ 00:07:08.545 END TEST app_cmdline 00:07:08.545 ************************************ 00:07:08.545 19:01:42 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:08.545 19:01:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.545 19:01:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.545 19:01:42 -- common/autotest_common.sh@10 -- # set +x 00:07:08.545 ************************************ 00:07:08.545 START TEST version 00:07:08.545 ************************************ 00:07:08.545 19:01:42 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:08.806 * Looking for test storage... 
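
The -32601 "Method not found" above is expected rather than a failure: the cmdline test launched spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allow-list is rejected. Reduced to the commands visible in the trace (paths shortened, flags copied verbatim):

    # start the target with only two RPCs exposed
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version          # allowed: returns the version JSON above
    ./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats    # blocked: -32601 "Method not found"
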
00:07:08.806 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:08.806 19:01:42 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:08.806 19:01:42 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:08.806 19:01:42 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:08.806 19:01:43 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:08.806 19:01:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.806 19:01:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.806 19:01:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.806 19:01:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.806 19:01:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.806 19:01:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.806 19:01:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.806 19:01:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.806 19:01:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.806 19:01:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.806 19:01:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.806 19:01:43 version -- scripts/common.sh@344 -- # case "$op" in 00:07:08.806 19:01:43 version -- scripts/common.sh@345 -- # : 1 00:07:08.806 19:01:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.806 19:01:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.806 19:01:43 version -- scripts/common.sh@365 -- # decimal 1 00:07:08.806 19:01:43 version -- scripts/common.sh@353 -- # local d=1 00:07:08.806 19:01:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.806 19:01:43 version -- scripts/common.sh@355 -- # echo 1 00:07:08.806 19:01:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.806 19:01:43 version -- scripts/common.sh@366 -- # decimal 2 00:07:08.806 19:01:43 version -- scripts/common.sh@353 -- # local d=2 00:07:08.806 19:01:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.806 19:01:43 version -- scripts/common.sh@355 -- # echo 2 00:07:08.806 19:01:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.806 19:01:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.806 19:01:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.806 19:01:43 version -- scripts/common.sh@368 -- # return 0 00:07:08.806 19:01:43 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.806 19:01:43 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:08.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.806 --rc genhtml_branch_coverage=1 00:07:08.806 --rc genhtml_function_coverage=1 00:07:08.806 --rc genhtml_legend=1 00:07:08.806 --rc geninfo_all_blocks=1 00:07:08.806 --rc geninfo_unexecuted_blocks=1 00:07:08.806 00:07:08.806 ' 00:07:08.806 19:01:43 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:08.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.807 --rc genhtml_branch_coverage=1 00:07:08.807 --rc genhtml_function_coverage=1 00:07:08.807 --rc genhtml_legend=1 00:07:08.807 --rc geninfo_all_blocks=1 00:07:08.807 --rc geninfo_unexecuted_blocks=1 00:07:08.807 00:07:08.807 ' 00:07:08.807 19:01:43 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:08.807 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.807 --rc genhtml_branch_coverage=1 00:07:08.807 --rc genhtml_function_coverage=1 00:07:08.807 --rc genhtml_legend=1 00:07:08.807 --rc geninfo_all_blocks=1 00:07:08.807 --rc geninfo_unexecuted_blocks=1 00:07:08.807 00:07:08.807 ' 00:07:08.807 19:01:43 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:08.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.807 --rc genhtml_branch_coverage=1 00:07:08.807 --rc genhtml_function_coverage=1 00:07:08.807 --rc genhtml_legend=1 00:07:08.807 --rc geninfo_all_blocks=1 00:07:08.807 --rc geninfo_unexecuted_blocks=1 00:07:08.807 00:07:08.807 ' 00:07:08.807 19:01:43 version -- app/version.sh@17 -- # get_header_version major 00:07:08.807 19:01:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:08.807 19:01:43 version -- app/version.sh@14 -- # cut -f2 00:07:08.807 19:01:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.807 19:01:43 version -- app/version.sh@17 -- # major=25 00:07:08.807 19:01:43 version -- app/version.sh@18 -- # get_header_version minor 00:07:08.807 19:01:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:08.807 19:01:43 version -- app/version.sh@14 -- # cut -f2 00:07:08.807 19:01:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.807 19:01:43 version -- app/version.sh@18 -- # minor=1 00:07:08.807 19:01:43 version -- app/version.sh@19 -- # get_header_version patch 00:07:08.807 19:01:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:08.807 19:01:43 version -- app/version.sh@14 -- # cut -f2 00:07:08.807 19:01:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.807 19:01:43 version -- app/version.sh@19 -- # patch=0 00:07:08.807 19:01:43 version -- app/version.sh@20 -- # get_header_version suffix 00:07:08.807 19:01:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:08.807 19:01:43 version -- app/version.sh@14 -- # cut -f2 00:07:08.807 19:01:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.807 19:01:43 version -- app/version.sh@20 -- # suffix=-pre 00:07:08.807 19:01:43 version -- app/version.sh@22 -- # version=25.1 00:07:08.807 19:01:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:08.807 19:01:43 version -- app/version.sh@28 -- # version=25.1rc0 00:07:08.807 19:01:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:08.807 19:01:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:08.807 19:01:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:08.807 19:01:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:08.807 00:07:08.807 real 0m0.276s 00:07:08.807 user 0m0.158s 00:07:08.807 sys 0m0.174s 00:07:08.807 19:01:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.807 19:01:43 version -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.807 ************************************ 00:07:08.807 END TEST version 00:07:08.807 ************************************ 00:07:09.068 19:01:43 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:09.068 19:01:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:09.068 19:01:43 -- spdk/autotest.sh@194 -- # uname -s 00:07:09.068 19:01:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:09.068 19:01:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:09.068 19:01:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:09.068 19:01:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:09.068 19:01:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:09.068 19:01:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:09.068 19:01:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.068 19:01:43 -- common/autotest_common.sh@10 -- # set +x 00:07:09.068 19:01:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:09.068 19:01:43 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:09.068 19:01:43 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:09.068 19:01:43 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:09.068 19:01:43 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:07:09.068 19:01:43 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:09.068 19:01:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.068 19:01:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.068 19:01:43 -- common/autotest_common.sh@10 -- # set +x 00:07:09.068 ************************************ 00:07:09.068 START TEST nvmf_rdma 00:07:09.068 ************************************ 00:07:09.068 19:01:43 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:09.068 * Looking for test storage... 00:07:09.069 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:09.069 19:01:43 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.069 19:01:43 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.069 19:01:43 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.329 19:01:43 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
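
version.sh builds its version string by scraping include/spdk/version.h with the grep/cut/tr pipeline traced above. A condensed sketch of that parsing (the commands mirror the xtrace; the final "-pre" to "rc0" mapping is inferred from the 25.1rc0 output rather than shown explicitly):

    # get_header_version, condensed from the xtrace above
    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')    # 25
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')    # 1
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')    # 0
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')  # -pre
    version=${major}.${minor}
    (( patch != 0 )) && version+=".${patch}"
    [[ $suffix == -pre ]] && version+=rc0    # assumed mapping, matching the 25.1rc0 check
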
ver1_l : ver2_l) )) 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.329 19:01:43 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.330 19:01:43 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:07:09.330 19:01:43 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.330 19:01:43 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.330 --rc genhtml_branch_coverage=1 00:07:09.330 --rc genhtml_function_coverage=1 00:07:09.330 --rc genhtml_legend=1 00:07:09.330 --rc geninfo_all_blocks=1 00:07:09.330 --rc geninfo_unexecuted_blocks=1 00:07:09.330 00:07:09.330 ' 00:07:09.330 19:01:43 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.330 --rc genhtml_branch_coverage=1 00:07:09.330 --rc genhtml_function_coverage=1 00:07:09.330 --rc genhtml_legend=1 00:07:09.330 --rc geninfo_all_blocks=1 00:07:09.330 --rc geninfo_unexecuted_blocks=1 00:07:09.330 00:07:09.330 ' 00:07:09.330 19:01:43 nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.330 --rc genhtml_branch_coverage=1 00:07:09.330 --rc genhtml_function_coverage=1 00:07:09.330 --rc genhtml_legend=1 00:07:09.330 --rc geninfo_all_blocks=1 00:07:09.330 --rc geninfo_unexecuted_blocks=1 00:07:09.330 00:07:09.330 ' 00:07:09.330 19:01:43 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.330 --rc genhtml_branch_coverage=1 00:07:09.330 --rc genhtml_function_coverage=1 00:07:09.330 --rc genhtml_legend=1 00:07:09.330 --rc geninfo_all_blocks=1 00:07:09.330 --rc geninfo_unexecuted_blocks=1 00:07:09.330 00:07:09.330 ' 00:07:09.330 19:01:43 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:09.330 19:01:43 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:09.330 19:01:43 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:09.330 19:01:43 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.330 19:01:43 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.330 19:01:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:09.330 ************************************ 00:07:09.330 START TEST nvmf_target_core 00:07:09.330 ************************************ 00:07:09.330 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:09.330 * Looking for test storage... 00:07:09.330 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:09.330 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.330 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.330 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.591 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.592 --rc genhtml_branch_coverage=1 00:07:09.592 --rc genhtml_function_coverage=1 00:07:09.592 --rc genhtml_legend=1 00:07:09.592 --rc geninfo_all_blocks=1 00:07:09.592 --rc geninfo_unexecuted_blocks=1 00:07:09.592 00:07:09.592 ' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.592 --rc genhtml_branch_coverage=1 00:07:09.592 --rc genhtml_function_coverage=1 00:07:09.592 --rc genhtml_legend=1 00:07:09.592 --rc geninfo_all_blocks=1 00:07:09.592 --rc geninfo_unexecuted_blocks=1 00:07:09.592 00:07:09.592 ' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.592 --rc genhtml_branch_coverage=1 00:07:09.592 --rc genhtml_function_coverage=1 00:07:09.592 --rc genhtml_legend=1 00:07:09.592 --rc geninfo_all_blocks=1 00:07:09.592 --rc geninfo_unexecuted_blocks=1 00:07:09.592 00:07:09.592 ' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.592 --rc genhtml_branch_coverage=1 00:07:09.592 --rc genhtml_function_coverage=1 00:07:09.592 --rc genhtml_legend=1 00:07:09.592 --rc geninfo_all_blocks=1 00:07:09.592 --rc geninfo_unexecuted_blocks=1 00:07:09.592 00:07:09.592 ' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.592 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:09.592 
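The two "[: : integer expression expected" messages in this section (nvmf/common.sh line 33, printed above and once more in the nvmf_abort preamble) come from the traced test '[' '' -eq 1 ']': an unset SPDK_TEST_* flag expands to an empty string, which the numeric -eq test rejects, so the check fails noisily but still falls through as false. The run is unaffected, but a defensive form would default the variable first. A minimal sketch, assuming bash; SPDK_TEST_EXAMPLE is a placeholder, not the flag common.sh actually reads:

# Guard numeric tests against empty or unset flags.
# SPDK_TEST_EXAMPLE stands in for whichever variable common.sh line 33 tests.
if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
    echo "flag enabled"
fi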
************************************ 00:07:09.592 START TEST nvmf_abort 00:07:09.592 ************************************ 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:09.592 * Looking for test storage... 00:07:09.592 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.592 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:09.854 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.854 --rc genhtml_branch_coverage=1 00:07:09.854 --rc genhtml_function_coverage=1 00:07:09.854 --rc genhtml_legend=1 00:07:09.854 --rc geninfo_all_blocks=1 00:07:09.854 --rc geninfo_unexecuted_blocks=1 00:07:09.854 00:07:09.854 ' 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.854 --rc genhtml_branch_coverage=1 00:07:09.854 --rc genhtml_function_coverage=1 00:07:09.854 --rc genhtml_legend=1 00:07:09.854 --rc geninfo_all_blocks=1 00:07:09.854 --rc geninfo_unexecuted_blocks=1 00:07:09.854 00:07:09.854 ' 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.854 --rc genhtml_branch_coverage=1 00:07:09.854 --rc genhtml_function_coverage=1 00:07:09.854 --rc genhtml_legend=1 00:07:09.854 --rc geninfo_all_blocks=1 00:07:09.854 --rc geninfo_unexecuted_blocks=1 00:07:09.854 00:07:09.854 ' 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.854 --rc genhtml_branch_coverage=1 00:07:09.854 --rc genhtml_function_coverage=1 00:07:09.854 --rc genhtml_legend=1 00:07:09.854 --rc geninfo_all_blocks=1 00:07:09.854 --rc geninfo_unexecuted_blocks=1 00:07:09.854 00:07:09.854 ' 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.854 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.855 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.855 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.994 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:17.995 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:17.995 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:17.995 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:17.995 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:17.995 6: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:07:17.995 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:17.995 altname enp217s0f0np0 00:07:17.995 altname ens818f0np0 00:07:17.995 inet 192.168.100.8/24 scope global mlx_0_0 00:07:17.995 valid_lft forever preferred_lft forever 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:17.995 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:17.995 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:17.995 altname enp217s0f1np1 00:07:17.995 altname ens818f1np1 00:07:17.995 inet 192.168.100.9/24 scope global mlx_0_1 00:07:17.995 valid_lft forever preferred_lft forever 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.995 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.996 19:01:51 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:17.996 192.168.100.9' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:17.996 192.168.100.9' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:17.996 192.168.100.9' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:17.996 19:01:51 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=131361 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 131361 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 131361 ']' 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 [2024-12-13 19:01:51.388184] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:17.996 [2024-12-13 19:01:51.388252] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.996 [2024-12-13 19:01:51.477535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.996 [2024-12-13 19:01:51.500869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.996 [2024-12-13 19:01:51.500906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.996 [2024-12-13 19:01:51.500915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.996 [2024-12-13 19:01:51.500923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.996 [2024-12-13 19:01:51.500930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
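nvmfappstart above launches build/bin/nvmf_tgt with core mask 0xE, i.e. cores 1 through 3, matching the three reactor notices that follow, and waitforlisten then blocks until the target answers on the RPC socket. A minimal sketch of that wait loop, assuming the default /var/tmp/spdk.sock path; the real helper also probes with an actual RPC call rather than only checking that the socket exists:

# Poll until the target process is alive and its RPC socket has appeared.
# Sketch only; autotest's waitforlisten additionally issues an RPC probe.
wait_for_tgt() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        [ -S "$sock" ] && return 0               # socket file is present
        sleep 0.1
    done
    return 1
}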
00:07:17.996 [2024-12-13 19:01:51.502472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.996 [2024-12-13 19:01:51.502581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.996 [2024-12-13 19:01:51.502582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 [2024-12-13 19:01:51.685487] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x140bc40/0x14100f0) succeed. 00:07:17.996 [2024-12-13 19:01:51.704612] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x140d1e0/0x1451790) succeed. 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 Malloc0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 Delay0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 [2024-12-13 19:01:51.880772] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.996 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:17.997 [2024-12-13 19:01:52.002705] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:19.908 Initializing NVMe Controllers 00:07:19.908 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:19.908 controller IO queue size 128 less than required 00:07:19.908 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:19.908 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:19.908 Initialization complete. Launching workers. 
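For reference, the target-side setup traced above (RDMA transport, a 64 MB malloc bdev with 4096-byte blocks wrapped in a delay bdev so the abort example has in-flight I/O to catch, then subsystem, namespace, and listener) corresponds to this rpc.py sequence; a sketch using the same arguments as the trace and assuming the default RPC socket:

# Same setup as the rpc_cmd calls above, issued via scripts/rpc.py.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
# ~1 s of injected latency (values are in microseconds) keeps I/O queued.
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

In the counters that follow, roughly: "failed" I/Os are the ones an abort caught, "unsuccessful" aborts found no matching in-flight I/O, and the test passes because no abort command itself errored out ("failed 0").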
00:07:19.908 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42943 00:07:19.908 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43004, failed to submit 62 00:07:19.908 success 42944, unsuccessful 60, failed 0 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:19.908 rmmod nvme_rdma 00:07:19.908 rmmod nvme_fabrics 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 131361 ']' 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 131361 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 131361 ']' 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 131361 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 131361 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 131361' 00:07:19.908 killing process with pid 131361 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 131361 00:07:19.908 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 131361 00:07:20.169 19:01:54 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.169 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:20.169 00:07:20.169 real 0m10.679s 00:07:20.169 user 0m13.058s 00:07:20.169 sys 0m6.063s 00:07:20.169 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.169 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.169 ************************************ 00:07:20.169 END TEST nvmf_abort 00:07:20.169 ************************************ 00:07:20.169 19:01:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:20.169 19:01:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.169 19:01:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.169 19:01:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.430 ************************************ 00:07:20.430 START TEST nvmf_ns_hotplug_stress 00:07:20.430 ************************************ 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:20.430 * Looking for test storage... 00:07:20.430 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.430 --rc genhtml_branch_coverage=1 00:07:20.430 --rc genhtml_function_coverage=1 00:07:20.430 --rc genhtml_legend=1 00:07:20.430 --rc geninfo_all_blocks=1 00:07:20.430 --rc geninfo_unexecuted_blocks=1 00:07:20.430 00:07:20.430 ' 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.430 --rc genhtml_branch_coverage=1 00:07:20.430 --rc genhtml_function_coverage=1 00:07:20.430 --rc genhtml_legend=1 00:07:20.430 --rc geninfo_all_blocks=1 00:07:20.430 --rc geninfo_unexecuted_blocks=1 00:07:20.430 00:07:20.430 ' 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.430 --rc genhtml_branch_coverage=1 00:07:20.430 --rc genhtml_function_coverage=1 00:07:20.430 --rc genhtml_legend=1 00:07:20.430 --rc geninfo_all_blocks=1 00:07:20.430 --rc geninfo_unexecuted_blocks=1 00:07:20.430 00:07:20.430 ' 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:20.430 --rc genhtml_branch_coverage=1 00:07:20.430 --rc genhtml_function_coverage=1 00:07:20.430 --rc genhtml_legend=1 00:07:20.430 --rc geninfo_all_blocks=1 00:07:20.430 --rc geninfo_unexecuted_blocks=1 00:07:20.430 00:07:20.430 ' 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.430 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.431 19:01:54 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:20.431 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:20.691 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:28.842 19:02:01 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:28.842 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:28.842 19:02:01 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:28.842 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:28.843 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:28.843 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:28.843 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:28.843 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:28.843 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:28.843 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:28.843 altname enp217s0f0np0 00:07:28.843 altname ens818f0np0 00:07:28.843 inet 192.168.100.8/24 scope global mlx_0_0 00:07:28.843 valid_lft forever preferred_lft forever 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:28.843 19:02:02 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:28.843 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:28.843 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:28.843 altname enp217s0f1np1 00:07:28.843 altname ens818f1np1 00:07:28.843 inet 192.168.100.9/24 scope global mlx_0_1 00:07:28.843 valid_lft forever preferred_lft forever 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:28.843 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:28.844 192.168.100.9' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:28.844 192.168.100.9' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:28.844 192.168.100.9' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=135349 00:07:28.844 19:02:02 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 135349 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 135349 ']' 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.844 [2024-12-13 19:02:02.237294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:28.844 [2024-12-13 19:02:02.237349] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.844 [2024-12-13 19:02:02.327193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.844 [2024-12-13 19:02:02.348965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.844 [2024-12-13 19:02:02.348999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.844 [2024-12-13 19:02:02.349008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.844 [2024-12-13 19:02:02.349017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.844 [2024-12-13 19:02:02.349039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
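The trace to this point launches the NVMe-oF target and waits for its RPC socket before configuring it. A minimal sketch of the equivalent bring-up outside this suite — the polling loop is an assumption standing in for the suite's own waitforlisten helper, and the binary/script paths are specific to this CI workspace:

```bash
#!/usr/bin/env bash
# Sketch reconstructed from the trace above; paths are CI-workspace-specific
# and the until-loop is a stand-in for the suite's waitforlisten helper.
NVMF_APP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups,
# -m 0xE: run reactors on cores 1-3 (matching the startup notices above).
"$NVMF_APP" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Block until the target answers on its default RPC socket.
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
```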
00:07:28.844 [2024-12-13 19:02:02.350515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.844 [2024-12-13 19:02:02.350613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.844 [2024-12-13 19:02:02.350614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:28.844 [2024-12-13 19:02:02.680453] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2485c40/0x248a0f0) succeed. 00:07:28.844 [2024-12-13 19:02:02.689627] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24871e0/0x24cb790) succeed. 00:07:28.844 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:28.844 19:02:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:28.844 [2024-12-13 19:02:03.197735] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:29.104 19:02:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:29.104 19:02:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:29.362 Malloc0 00:07:29.362 19:02:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:29.621 Delay0 00:07:29.621 19:02:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.881 19:02:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:07:29.881 NULL1 00:07:29.881 19:02:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:30.141 19:02:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=135660 00:07:30.141 19:02:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:30.141 19:02:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:30.141 19:02:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.522 Read completed with error (sct=0, sc=11) 00:07:31.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.522 19:02:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.522 19:02:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:31.522 19:02:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:31.782 true 00:07:31.782 19:02:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:31.782 19:02:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.722 19:02:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.722 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:07:32.722 19:02:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:32.722 19:02:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:32.981 true 00:07:32.981 19:02:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:32.981 19:02:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.921 19:02:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.921 19:02:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:33.921 19:02:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:34.181 true 00:07:34.181 19:02:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:34.181 19:02:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.121 19:02:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.121 19:02:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:35.121 19:02:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1004 00:07:35.381 true 00:07:35.381 19:02:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:35.381 19:02:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.319 19:02:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.319 19:02:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:36.319 19:02:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:36.579 true 00:07:36.579 19:02:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:36.579 19:02:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.518 19:02:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.518 19:02:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:37.518 19:02:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:37.777 true 00:07:37.777 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:37.777 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.037 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.037 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:38.037 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:38.296 
true 00:07:38.296 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:38.296 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.677 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.677 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:39.677 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:39.936 true 00:07:39.936 19:02:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:39.936 19:02:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.766 19:02:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.766 19:02:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:40.766 19:02:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:41.025 true 00:07:41.025 19:02:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:41.025 19:02:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.964 19:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.964 19:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:41.964 19:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:42.224 true 00:07:42.224 19:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:42.224 19:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.163 19:02:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.163 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.163 19:02:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:43.163 19:02:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:43.423 true 00:07:43.423 19:02:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:43.423 19:02:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.362 19:02:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.362 19:02:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:44.362 19:02:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:44.621 true 00:07:44.621 19:02:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:44.622 19:02:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.560 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.560 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:45.560 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:45.820 true 00:07:45.820 19:02:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:45.820 19:02:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.080 19:02:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.339 19:02:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:46.339 19:02:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:46.599 true 00:07:46.599 19:02:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:46.599 19:02:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.539 19:02:21 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.799 19:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:47.799 19:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:47.799 true 00:07:47.799 19:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:47.799 19:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.739 19:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.998 19:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:48.998 19:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:48.998 true 00:07:49.258 19:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:49.258 19:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.837 19:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.096 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.096 19:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:50.096 19:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:50.356 true 00:07:50.356 19:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:50.356 19:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.296 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.296 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:51.296 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:51.556 true 00:07:51.556 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:51.556 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.496 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
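What this stretch of the log is exercising: ns_hotplug_stress.sh keeps hot-removing and re-adding namespace 1 of nqn.2016-06.io.spdk:cnode1 while a background I/O generator (PID 135660, polled with kill -0) runs against it, and on every pass it also grows the NULL1 bdev by one step (1010, 1011, ...). The "Read completed with error (sct=0, sc=11)" lines are the expected fallout of reads racing the hot-remove, and the reporting is rate-limited to one line per thousand occurrences ("Message suppressed 999 times"). A minimal sketch of the loop being traced, reconstructed from the @44-@50 line markers (the while-condition and the rpc/perf_pid variable names are assumptions, not the script's verbatim source):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  while kill -0 "$perf_pid"; do                                      # line 44: keep churning while the I/O generator is alive
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove NSID 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: re-attach it, backed by the Delay0 bdev
    null_size=$((null_size + 1))                                     # line 49: next size for the null bdev
    "$rpc" bdev_null_resize NULL1 "$null_size"                       # line 50: resize NULL1 while I/O is in flight
  done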
00:07:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.496 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:52.496 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:52.755 true 00:07:52.755 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:52.755 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.696 19:02:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.696 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:53.696 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:53.956 true 00:07:53.956 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:53.956 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.215 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.215 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:54.215 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:54.474 true 00:07:54.474 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:54.474 19:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.854 19:02:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.854 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.854 19:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:55.854 19:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:56.114 true 00:07:56.114 19:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:56.114 19:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.943 19:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.944 19:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:56.944 19:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:57.202 true 00:07:57.202 19:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:57.202 19:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.141 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.141 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:58.141 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:58.401 true 00:07:58.401 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:58.401 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.341 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:59.341 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:59.601 true 00:07:59.601 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:07:59.601 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.540 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.540 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:00.540 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:00.800 true 00:08:00.800 19:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660 00:08:00.800 19:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.058 19:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.317 19:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:01.317 19:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:01.317 true
00:08:01.576 19:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660
00:08:01.576 19:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.576 19:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:01.836 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:01.836 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:02.110 true
00:08:02.110 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660
00:08:02.110 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.110 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:02.372 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:02.372 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:02.372 Initializing NVMe Controllers
00:08:02.372 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:02.372 Controller IO queue size 128, less than required.
00:08:02.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:02.372 Controller IO queue size 128, less than required.
00:08:02.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:02.372 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:02.372 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:02.372 Initialization complete. Launching workers.
00:08:02.372 ========================================================
00:08:02.372                                                                                   Latency(us)
00:08:02.372 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:02.372 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5401.67       2.64   21425.46     796.15 1007112.20
00:08:02.372 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   35159.27      17.17    3640.37    1992.08  286440.09
00:08:02.372 ========================================================
00:08:02.372 Total                                                                    :   40560.93      19.81    6008.89     796.15 1007112.20
00:08:02.637 true
00:08:02.637 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 135660
00:08:02.637 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (135660) - No such process
00:08:02.637 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 135660
00:08:02.637 19:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:02.900 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:03.164 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:03.164 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:03.164 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:03.164 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.164 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:03.164 null0
00:08:03.164 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:03.164 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.164 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:03.432 null1
00:08:03.432 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:03.432 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:03.432 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
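Two sanity checks on the perf summary above. First, the Total row is the IOPS-weighted combination of the two namespaces, so the table is internally consistent:

  Total IOPS    : 5401.67 + 35159.27                                = 40560.93
  Total Average : (5401.67*21425.46 + 35159.27*3640.37) / 40560.93  ≈ 6008.89 us
  Total min/max : extremes of the two rows                          = 796.15 us / 1007112.20 us (worst case ~1.0 s)

Second, the split is the point of the test: NSID 1, the namespace being hot-plugged (and, going by the name, presumably backed by a delay-injecting Delay0 bdev), averages ~21.4 ms per I/O versus ~3.6 ms for NSID 2. The "kill: (135660) - No such process" line is the loop's normal exit: the I/O generator has finished, so kill -0 fails, the script waits on it, removes namespaces 1 and 2, and moves on to the multi-threaded phase.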
00:08:03.700 null3 00:08:03.965 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.965 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.965 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:03.965 null4 00:08:03.965 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.965 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.965 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:04.231 null5 00:08:04.231 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.231 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.231 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:04.497 null6 00:08:04.497 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.497 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.497 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:04.497 null7 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 141874 141875 141877 141879 141881 141883 141885 141887 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.766 19:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.766 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.766 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.767 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.767 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.767 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.767 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.767 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.767 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.036 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.036 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.036 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.036 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.036 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.036 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.036 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.036 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
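The traces above are the script's second phase: eight null bdevs (null0 through null7, each created as 100 MiB with a 4096-byte block size at line 60), then eight backgrounded add_remove workers, each adding and removing its own namespace ID ten times, joined by the wait on the eight PIDs listed at line 66. A sketch of the shape, reconstructed from the @14-@18 and @58-@66 markers (function and loop bodies are inferred from the trace, not copied from the script; $rpc abbreviates the rpc.py path as before):

  add_remove() {                                    # lines 14-18: one worker
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
      "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
    done
  }
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do              # lines 59-60: create the backing bdevs
    "$rpc" bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do              # lines 62-64: launch the workers
    add_remove "$((i + 1))" "null$i" &              # worker k churns NSID k on bdev null(k-1)
    pids+=($!)
  done
  wait "${pids[@]}"                                 # line 66: join all eight workers

All eight workers write to the same trace, so their add_ns/remove_ns lines interleave in no particular order from here on; the shuffled NSID sequence below is scheduling noise, not a test failure.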
00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.037 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.308 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.308 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.308 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.308 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.308 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.308 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.308 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.308 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.596 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.597 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.597 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.597 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.597 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.863 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.863 19:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.863 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.864 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.864 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.864 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.864 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.864 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.864 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.864 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.146 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.146 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.146 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.146 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.146 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.146 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.146 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.146 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.470 19:02:40 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.470 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.734 19:02:40 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.734 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.016 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.016 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.016 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.017 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.288 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.289 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.555 19:02:41 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.555 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.824 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.824 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.824 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.824 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.824 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.824 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.824 19:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.824 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.824 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.824 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.824 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.825 19:02:42 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.825 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.096 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.096 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.096 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.096 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.096 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.096 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.096 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.096 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.363 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.631 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.632 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.632 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.632 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.632 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.632 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.905 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.905 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.905 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.905 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.905 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.905 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.905 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:08.906 rmmod nvme_rdma 00:08:08.906 rmmod nvme_fabrics 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 135349 ']' 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 135349 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 135349 ']' 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 135349 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135349 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135349' 00:08:08.906 killing process with pid 135349 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 135349 00:08:08.906 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 135349 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:09.175 00:08:09.175 real 0m48.816s 00:08:09.175 user 3m20.429s 00:08:09.175 sys 0m14.591s 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.175 ************************************ 00:08:09.175 END TEST nvmf_ns_hotplug_stress 00:08:09.175 ************************************ 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.175 ************************************ 00:08:09.175 START TEST nvmf_delete_subsystem 00:08:09.175 ************************************ 00:08:09.175 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:09.464 * Looking for test storage... 00:08:09.464 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:09.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.464 --rc genhtml_branch_coverage=1 00:08:09.464 --rc genhtml_function_coverage=1 00:08:09.464 --rc genhtml_legend=1 00:08:09.464 --rc geninfo_all_blocks=1 00:08:09.464 --rc geninfo_unexecuted_blocks=1 00:08:09.464 00:08:09.464 ' 00:08:09.464 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:09.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.464 --rc genhtml_branch_coverage=1 00:08:09.464 --rc genhtml_function_coverage=1 00:08:09.465 --rc genhtml_legend=1 00:08:09.465 --rc geninfo_all_blocks=1 00:08:09.465 --rc geninfo_unexecuted_blocks=1 00:08:09.465 00:08:09.465 ' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:09.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.465 --rc genhtml_branch_coverage=1 00:08:09.465 --rc genhtml_function_coverage=1 00:08:09.465 --rc genhtml_legend=1 00:08:09.465 --rc geninfo_all_blocks=1 00:08:09.465 --rc geninfo_unexecuted_blocks=1 00:08:09.465 00:08:09.465 ' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:09.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.465 --rc genhtml_branch_coverage=1 00:08:09.465 --rc genhtml_function_coverage=1 00:08:09.465 --rc genhtml_legend=1 00:08:09.465 --rc geninfo_all_blocks=1 00:08:09.465 --rc geninfo_unexecuted_blocks=1 00:08:09.465 00:08:09.465 ' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.465 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.465 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:17.635 19:02:50 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:17.635 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:17.636 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:17.636 
19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:17.636 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:17.636 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:17.636 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:17.636 19:02:50 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:17.636 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:17.636 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:17.636 altname enp217s0f0np0 00:08:17.636 altname ens818f0np0 00:08:17.636 inet 192.168.100.8/24 scope global mlx_0_0 00:08:17.636 valid_lft forever preferred_lft forever 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:17.636 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:17.636 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:17.636 altname enp217s0f1np1 00:08:17.636 
altname ens818f1np1 00:08:17.636 inet 192.168.100.9/24 scope global mlx_0_1 00:08:17.636 valid_lft forever preferred_lft forever 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:17.636 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:17.637 19:02:50 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:17.637 192.168.100.9' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:17.637 192.168.100.9' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:17.637 192.168.100.9' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=146294 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 146294 00:08:17.637 19:02:50 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 146294 ']' 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.637 19:02:50 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 [2024-12-13 19:02:51.042945] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:17.637 [2024-12-13 19:02:51.043006] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.637 [2024-12-13 19:02:51.135731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.637 [2024-12-13 19:02:51.157292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.637 [2024-12-13 19:02:51.157325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.637 [2024-12-13 19:02:51.157335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.637 [2024-12-13 19:02:51.157344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.637 [2024-12-13 19:02:51.157351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
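For readers following the allocate_nic_ips portion of the trace a few lines up: the get_ip_address calls at nvmf/common.sh@116-117 reduce to a three-stage ip/awk/cut pipeline. A minimal standalone sketch of that helper, reconstructed from the trace rather than copied from nvmf/common.sh:

    # Print the first IPv4 address bound to an interface. With `ip -o -4`,
    # each address is one output line and field 4 is the CIDR form,
    # e.g. 192.168.100.8/24; cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed
    get_ip_address mlx_0_1   # -> 192.168.100.9

Run once per RDMA interface, this is what populates NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP (and, later, RDMA_IP_LIST) in the trace above.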
00:08:17.637 [2024-12-13 19:02:51.158505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.637 [2024-12-13 19:02:51.158506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 [2024-12-13 19:02:51.331279] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc18da0/0xc1d250) succeed. 00:08:17.637 [2024-12-13 19:02:51.339975] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc1a2a0/0xc5e8f0) succeed. 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 [2024-12-13 19:02:51.418991] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 NULL1 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 Delay0 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=146320 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:17.637 19:02:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:17.637 [2024-12-13 19:02:51.552069] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
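To make the target setup above easier to follow, here is the same sequence expressed as plain scripts/rpc.py calls. This is a sketch assuming the default RPC socket; in the harness, rpc_cmd forwards the same method names and arguments to the target's RPC server. The argument values are taken verbatim from the trace:

    # Transport, subsystem, listener: an RDMA target on 192.168.100.8:4420.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Backing storage: a 1000 MiB null bdev with 512-byte blocks, wrapped in a
    # delay bdev that adds 1,000,000 us (1 s) to every read and write.
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second Delay0 latency is deliberate: it keeps I/O pinned in flight so the subsystem can be deleted mid-workload, and it is why the minimum latencies in the tables below all sit just above 1,000,000 us.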
00:08:19.549 19:02:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
19:02:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
19:02:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:20.488 NVMe io qpair process completion error (same message repeated six times, 00:08:20.488-489)
19:02:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
19:02:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
19:02:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 146320
19:02:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:21.058 19:02:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
19:02:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 146320
19:02:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:21.319-00:08:21.321 [several hundred identical completion lines elided: "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)", interleaved with "starting I/O failed: -6", as the deleted subsystem's qpairs drained]
00:08:21.321 Initializing NVMe Controllers
00:08:21.321 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:21.321 Controller IO queue size 128, less than required.
00:08:21.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:21.321 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:21.321 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:21.321 Initialization complete. Launching workers.
00:08:21.321 ========================================================
00:08:21.321 Latency(us)
00:08:21.321 Device Information : IOPS MiB/s Average min max
00:08:21.321 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.59 0.04 1593424.59 1000078.73 2968822.91
00:08:21.321 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.59 0.04 1591869.29 1000123.67 2966940.30
00:08:21.321 ========================================================
00:08:21.321 Total : 161.19 0.08 1592646.94 1000078.73 2968822.91
00:08:21.321
19:02:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
19:02:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 146320
19:02:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
[2024-12-13 19:02:55.662403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-12-13 19:02:55.662444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
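The CQ transport error above, and the 'errors occurred' exit from spdk_nvme_perf just below, are the point of the test: the subsystem was deleted while I/O was in flight, and the initiator is expected to fail promptly rather than hang. Stripped of xtrace noise, the pattern the script drives looks roughly like this; a sketch reconstructed from the trace (the SPDK_BIN stand-in, the stderr redirect, and the timeout handling are assumptions, not lines from delete_subsystem.sh):

    # Run perf in the background against the target, delete the subsystem
    # mid-run, then poll until perf notices the dead controller and exits.
    $SPDK_BIN/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                    # let the workload ramp up first
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 $perf_pid 2> /dev/null; do   # the @35/@36 poll in the trace
        (( delay++ > 30 )) && exit 1           # give up after ~15 s of 0.5 s polls
        sleep 0.5
    done

The second half of the test (perf_pid=147131 below) then re-creates the subsystem and verifies that a fresh perf run against it completes cleanly.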
00:08:21.321 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 146320 00:08:21.892 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (146320) - No such process 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 146320 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 146320 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 146320 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.892 [2024-12-13 19:02:56.179275] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=147131 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131 00:08:21.892 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.152 [2024-12-13 19:02:56.286185] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:22.412 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.412 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131 00:08:22.412 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.982 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.982 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131 00:08:22.982 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.552 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.552 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131 00:08:23.552 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.122 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.122 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131 00:08:24.122 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.382 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.382 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131 00:08:24.382 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.951 19:02:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.951 19:02:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131 00:08:24.951 19:02:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 
-- # sleep 0.5
00:08:25.521 19:02:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
19:02:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131
19:02:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[the same @60 / @57 / @58 poll repeats at 00:08:26.091, 00:08:26.660, 00:08:26.919, 00:08:27.489, 00:08:28.059, 00:08:28.628 and 00:08:29.198 while spdk_nvme_perf runs its 3-second workload]
00:08:29.198 Initializing NVMe Controllers
00:08:29.198 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:29.198 Controller IO queue size 128, less than required.
00:08:29.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:29.198 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:29.198 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:29.198 Initialization complete. Launching workers.
00:08:29.198 ========================================================
00:08:29.198 Latency(us)
00:08:29.198 Device Information : IOPS MiB/s Average min max
00:08:29.198 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002933.34 1000052.19 1008919.13
00:08:29.198 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001539.60 1000059.79 1006066.48
00:08:29.198 ========================================================
00:08:29.198 Total : 256.00 0.12 1002236.47 1000052.19 1008919.13
00:08:29.198
00:08:29.463 19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 147131
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (147131) - No such process
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 147131
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 146294 ']'
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 146294
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 146294 ']'
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 146294
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:08:29.734 19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146294 00:08:29.734 19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.734 19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.734 19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146294' 00:08:29.734 killing process with pid 146294 00:08:29.734 19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 146294 00:08:29.734 19:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 146294 00:08:29.734 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.734 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:29.734 00:08:29.734 real 0m20.639s 00:08:29.734 user 0m49.333s 00:08:29.734 sys 0m6.726s 00:08:29.734 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.994 ************************************ 00:08:29.994 END TEST nvmf_delete_subsystem 00:08:29.994 ************************************ 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.994 ************************************ 00:08:29.994 START TEST nvmf_host_management 00:08:29.994 ************************************ 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:29.994 * Looking for test storage... 
00:08:29.994 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.994 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.255 --rc genhtml_branch_coverage=1 00:08:30.255 --rc genhtml_function_coverage=1 00:08:30.255 --rc genhtml_legend=1 00:08:30.255 --rc geninfo_all_blocks=1 00:08:30.255 --rc geninfo_unexecuted_blocks=1 00:08:30.255 00:08:30.255 ' 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.255 --rc genhtml_branch_coverage=1 00:08:30.255 --rc genhtml_function_coverage=1 00:08:30.255 --rc genhtml_legend=1 00:08:30.255 --rc geninfo_all_blocks=1 00:08:30.255 --rc geninfo_unexecuted_blocks=1 00:08:30.255 00:08:30.255 ' 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.255 --rc genhtml_branch_coverage=1 00:08:30.255 --rc genhtml_function_coverage=1 00:08:30.255 --rc genhtml_legend=1 00:08:30.255 --rc geninfo_all_blocks=1 00:08:30.255 --rc geninfo_unexecuted_blocks=1 00:08:30.255 00:08:30.255 ' 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.255 --rc genhtml_branch_coverage=1 00:08:30.255 --rc genhtml_function_coverage=1 00:08:30.255 --rc genhtml_legend=1 00:08:30.255 --rc geninfo_all_blocks=1 00:08:30.255 --rc geninfo_unexecuted_blocks=1 00:08:30.255 00:08:30.255 ' 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
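The cmp_versions walk above splits both version strings on `.`/`-`/`:` and compares them field by field; this is how the harness decides that the installed lcov (1.15 here) predates 2.x and picks the matching LCOV_OPTS. The whole dance compresses to a few lines (a simplified sketch: purely numeric dotted versions, no suffix handling):

    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"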
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:30.255 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
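The NVME_HOSTNQN above comes from `nvme gen-hostnqn`, which wraps a host UUID in the 2014-08 nvmexpress NQN format. When nvme-cli is missing, the same shape can be composed by hand (a fallback sketch, not part of common.sh):

    if command -v nvme >/dev/null 2>&1; then
        hostnqn=$(nvme gen-hostnqn)
    else
        hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
    fi
    echo "$hostnqn"    # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-...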
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
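paths/export.sh simply prepends its directories every time it is sourced, which is why the PATH above carries the same golangci/protoc/go segments over and over. If the repetition mattered, a duplicate-free prepend is a one-liner case statement (an alternative sketch, not what the script actually does):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present: leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH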
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.256 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.256 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:38.393 19:03:11 
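The `[: : integer expression expected` complaint above is a real shell error captured by the run: common.sh line 33 hands `-eq` an empty string, and `[` requires both operands to be integers. The usual guard is a default expansion (the variable name below is hypothetical; the log does not show which one is empty):

    flag=""                              # hypothetical stand-in for the empty value
    if [ "${flag:-0}" -eq 1 ]; then      # ${var:-0} keeps the operand numeric
        echo "flag is set"
    fi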
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:38.393 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:38.393 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.393 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:38.394 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:d9:00.1: mlx_0_1' 00:08:38.394 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
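Both "Found net devices" lines come from a pure-sysfs lookup: every netdev owned by a PCI function appears under that function's net/ directory, so no driver tooling is needed. Standalone (the bus address is the one from this run; any function with a net/ subdirectory works):

    shopt -s nullglob                                 # empty array when nothing matches
    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"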
00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:38.394 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.394 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:38.394 altname enp217s0f0np0 00:08:38.394 altname ens818f0np0 00:08:38.394 inet 192.168.100.8/24 scope global mlx_0_0 00:08:38.394 valid_lft forever preferred_lft forever 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:38.394 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.394 link/ether ec:0d:9a:8b:2d:dd brd 
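get_ip_address, traced once per interface above, is a single pipeline: `ip -o -4 addr show` prints one line per address, field 4 holds the CIDR form (192.168.100.8/24), and cut drops the prefix length:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this host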
ff:ff:ff:ff:ff:ff 00:08:38.394 altname enp217s0f1np1 00:08:38.394 altname ens818f1np1 00:08:38.394 inet 192.168.100.9/24 scope global mlx_0_1 00:08:38.394 valid_lft forever preferred_lft forever 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:38.394 19:03:11 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:38.394 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:38.395 192.168.100.9' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:38.395 192.168.100.9' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:38.395 192.168.100.9' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=152457 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 152457 00:08:38.395 
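RDMA_IP_LIST is newline-separated, so the first and second target IPs peel off with head/tail exactly as traced above:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9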
19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 152457 ']' 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.395 19:03:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 [2024-12-13 19:03:11.786852] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:38.395 [2024-12-13 19:03:11.786913] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.395 [2024-12-13 19:03:11.880047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.395 [2024-12-13 19:03:11.902792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.395 [2024-12-13 19:03:11.902830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.395 [2024-12-13 19:03:11.902839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.395 [2024-12-13 19:03:11.902848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.395 [2024-12-13 19:03:11.902856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
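waitforlisten parks the test until nvmf_tgt (pid 152457 here) answers on /var/tmp/spdk.sock. A hedged approximation of the idea (SPDK's real helper issues an RPC rather than merely checking the socket, and retries up to the max_retries=100 visible above):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # socket is up: ready for RPCs
            sleep 0.1
        done
        return 1
    }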
00:08:38.395 [2024-12-13 19:03:11.904603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.395 [2024-12-13 19:03:11.904710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.395 [2024-12-13 19:03:11.904821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.395 [2024-12-13 19:03:11.904822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 [2024-12-13 19:03:12.074853] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x77e840/0x782cf0) succeed. 00:08:38.395 [2024-12-13 19:03:12.084571] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x77fe80/0x7c4390) succeed. 
00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 Malloc0 00:08:38.395 [2024-12-13 19:03:12.275441] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=152585 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 152585 /var/tmp/bdevperf.sock 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 152585 ']' 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
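The target the log just brought up (RDMA transport, Malloc0 namespace, listener on 192.168.100.8:4420) is assembled by host_management.sh as an RPC batch through rpcs.txt; an equivalent one-command-at-a-time sequence with scripts/rpc.py looks like this (the RPC names are real SPDK methods and the values mirror this log — 64 MiB/512 B Malloc bdev, serial SPDKISFASTANDAWESOME — but treat it as a sketch, not the script's literal contents):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0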
00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.395 { 00:08:38.395 "params": { 00:08:38.395 "name": "Nvme$subsystem", 00:08:38.395 "trtype": "$TEST_TRANSPORT", 00:08:38.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.395 "adrfam": "ipv4", 00:08:38.395 "trsvcid": "$NVMF_PORT", 00:08:38.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.395 "hdgst": ${hdgst:-false}, 00:08:38.395 "ddgst": ${ddgst:-false} 00:08:38.395 }, 00:08:38.395 "method": "bdev_nvme_attach_controller" 00:08:38.395 } 00:08:38.395 EOF 00:08:38.395 )") 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:38.395 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.396 "params": { 00:08:38.396 "name": "Nvme0", 00:08:38.396 "trtype": "rdma", 00:08:38.396 "traddr": "192.168.100.8", 00:08:38.396 "adrfam": "ipv4", 00:08:38.396 "trsvcid": "4420", 00:08:38.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:38.396 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:38.396 "hdgst": false, 00:08:38.396 "ddgst": false 00:08:38.396 }, 00:08:38.396 "method": "bdev_nvme_attach_controller" 00:08:38.396 }' 00:08:38.396 [2024-12-13 19:03:12.380981] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:38.396 [2024-12-13 19:03:12.381039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152585 ] 00:08:38.396 [2024-12-13 19:03:12.475200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.396 [2024-12-13 19:03:12.497400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.396 Running I/O for 10 seconds... 
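The /dev/fd/63 argument in the bdevperf launch above is bash process substitution: the JSON printed by gen_nvmf_target_json (via the heredoc template just traced) is handed to bdevperf as a file descriptor instead of a temp file. An equivalent invocation:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
    # -q 64: queue depth, -o 65536: 64 KiB I/O size,
    # -w verify: read-back-verify workload, -t 10: run for ten seconds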
00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.396 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=171 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 171 -ge 100 ']' 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
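waitforio counts i down from 10 while polling bdevperf's per-bdev counters over its private RPC socket; the 171 reads seen above already clear the 100-op threshold, so the loop exits on the first pass and proves I/O is flowing before the test pokes the subsystem. The probe itself:

    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && echo "I/O confirmed: num_read_ops=$read_io_count"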
00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.656 19:03:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:39.595 256.00 IOPS, 16.00 MiB/s [2024-12-13T18:03:13.973Z] [2024-12-13 19:03:13.799885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d4fb00 len:0x10000 key:0x182000 00:08:39.595 [2024-12-13 19:03:13.799916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:17981 cdw0:e01f4000 sqhd:5ab0 p:1 m:0 dnr:0 00:08:39.595 [2024-12-13 19:03:13.799933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d3fa80 len:0x10000 key:0x182000 00:08:39.595 [2024-12-13 19:03:13.799942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:17981 cdw0:e01f4000 sqhd:5ab0 p:1 m:0 dnr:0 00:08:39.595 [2024-12-13 19:03:13.799958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d2fa00 len:0x10000 key:0x182000 00:08:39.595 [2024-12-13 19:03:13.799967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:17981 cdw0:e01f4000 sqhd:5ab0 p:1 m:0 dnr:0 00:08:39.595 [2024-12-13 19:03:13.799978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d1f980 len:0x10000 key:0x182000 00:08:39.595 [2024-12-13 19:03:13.799987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:17981 cdw0:e01f4000 sqhd:5ab0 p:1 m:0 dnr:0 00:08:39.595 [2024-12-13 19:03:13.799999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d0f900 len:0x10000 key:0x182000 00:08:39.595 [2024-12-13 19:03:13.800008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:17981 cdw0:e01f4000 sqhd:5ab0 p:1 m:0 dnr:0 00:08:39.595 [2024-12-13 19:03:13.800018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cff880 len:0x10000 key:0x182000 00:08:39.595 [2024-12-13 19:03:13.800027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:17981 cdw0:e01f4000 sqhd:5ab0 p:1 m:0 dnr:0 00:08:39.595 [2024-12-13 19:03:13.800038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cef800 len:0x10000 key:0x182000 00:08:39.595 [2024-12-13 19:03:13.800051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
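With verify I/O in flight, the test yanks the host's access and immediately grants it back; the ABORTED - SQ DELETION completions that follow reflect the qpair teardown this forces on the initiator side. The two RPCs, standalone:

    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1   # give the qpairs time to drop and reconnect before checking I/O again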
cid:17981 cdw0:e01f4000 sqhd:5ab0 p:1 m:0 dnr:0
[57 repeated nvme_qpair.c print_command/print_completion *NOTICE* pairs elided: WRITE sqid:1 lba 38784-40832 and READ sqid:1 lba 32768-37760 (len:128 each), all completed ABORTED - SQ DELETION (00/08) qid:1 cid:17981 cdw0:e01f4000 sqhd:5ab0 p:1 m:0 dnr:0]
00:08:39.597 [2024-12-13 19:03:13.803915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:39.597 task offset: 37888 on job bdev=Nvme0n1 fails
00:08:39.597
00:08:39.597 Latency(us)
00:08:39.597 [2024-12-13T18:03:13.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:39.597 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:39.597 Job: Nvme0n1 ended in about 1.12 seconds with error
00:08:39.597 Verification LBA range: start 0x0 length 0x400
00:08:39.597 Nvme0n1 : 1.12 228.37 14.27 57.09 0.00 222175.76 2333.08 1013343.85
00:08:39.597 [2024-12-13T18:03:13.975Z] ===================================================================================================================
00:08:39.597 [2024-12-13T18:03:13.975Z] Total : 228.37 14.27 57.09 0.00 222175.76 2333.08 1013343.85
00:08:39.597 [2024-12-13 19:03:13.806353] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 152585
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:39.597 {
00:08:39.597 "params": {
00:08:39.597 "name": "Nvme$subsystem",
00:08:39.597 "trtype": "$TEST_TRANSPORT",
00:08:39.597 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:39.597 "adrfam": "ipv4",
00:08:39.597 "trsvcid": "$NVMF_PORT",
00:08:39.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:39.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:39.597 "hdgst": ${hdgst:-false},
00:08:39.597 "ddgst": ${ddgst:-false}
00:08:39.597 },
00:08:39.597 "method": "bdev_nvme_attach_controller"
00:08:39.597 }
00:08:39.597 EOF
00:08:39.597 )")
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:08:39.597 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:39.597 "params": {
00:08:39.597 "name": "Nvme0",
00:08:39.597 "trtype": "rdma",
00:08:39.597 "traddr": "192.168.100.8",
00:08:39.597 "adrfam": "ipv4",
00:08:39.597 "trsvcid": "4420",
00:08:39.597 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:39.597 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:39.597 "hdgst": false,
00:08:39.597 "ddgst": false
00:08:39.597 },
00:08:39.597 "method": "bdev_nvme_attach_controller"
00:08:39.597 }'
00:08:39.597 [2024-12-13 19:03:13.861998] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:08:39.597 [2024-12-13 19:03:13.862052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152836 ]
00:08:39.597 [2024-12-13 19:03:13.954615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.857 [2024-12-13 19:03:13.977305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:39.857 Running I/O for 1 seconds...
00:08:40.798 3072.00 IOPS, 192.00 MiB/s
00:08:40.798 Latency(us)
00:08:40.798 [2024-12-13T18:03:15.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.798 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:40.798 Verification LBA range: start 0x0 length 0x400
00:08:40.798 Nvme0n1 : 1.01 3118.43 194.90 0.00 0.00 20114.33 989.59 39007.03
00:08:40.798 [2024-12-13T18:03:15.176Z] ===================================================================================================================
00:08:40.798 [2024-12-13T18:03:15.176Z] Total : 3118.43 194.90 0.00 0.00 20114.33 989.59 39007.03
00:08:41.057 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 152585 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}"
00:08:41.057 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:41.057 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 152457 ']'
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 152457
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 152457 ']'
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 152457
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 --
# '[' Linux = Linux ']' 00:08:41.058 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152457 00:08:41.317 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.317 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.317 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152457' 00:08:41.317 killing process with pid 152457 00:08:41.317 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 152457 00:08:41.317 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 152457 00:08:41.577 [2024-12-13 19:03:15.707885] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:41.577 00:08:41.577 real 0m11.536s 00:08:41.577 user 0m19.975s 00:08:41.577 sys 0m6.605s 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.577 ************************************ 00:08:41.577 END TEST nvmf_host_management 00:08:41.577 ************************************ 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.577 ************************************ 00:08:41.577 START TEST nvmf_lvol 00:08:41.577 ************************************ 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:41.577 * Looking for test storage... 
00:08:41.577 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.577 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.837 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.837 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.837 --rc genhtml_branch_coverage=1 00:08:41.837 --rc genhtml_function_coverage=1 00:08:41.837 --rc genhtml_legend=1 00:08:41.837 --rc geninfo_all_blocks=1 00:08:41.837 --rc geninfo_unexecuted_blocks=1 00:08:41.837 00:08:41.837 ' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:41.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.838 --rc genhtml_branch_coverage=1 00:08:41.838 --rc genhtml_function_coverage=1 00:08:41.838 --rc genhtml_legend=1 00:08:41.838 --rc geninfo_all_blocks=1 00:08:41.838 --rc geninfo_unexecuted_blocks=1 00:08:41.838 00:08:41.838 ' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:41.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.838 --rc genhtml_branch_coverage=1 00:08:41.838 --rc genhtml_function_coverage=1 00:08:41.838 --rc genhtml_legend=1 00:08:41.838 --rc geninfo_all_blocks=1 00:08:41.838 --rc geninfo_unexecuted_blocks=1 00:08:41.838 00:08:41.838 ' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:41.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.838 --rc genhtml_branch_coverage=1 00:08:41.838 --rc genhtml_function_coverage=1 00:08:41.838 --rc genhtml_legend=1 00:08:41.838 --rc geninfo_all_blocks=1 00:08:41.838 --rc geninfo_unexecuted_blocks=1 00:08:41.838 00:08:41.838 ' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.838 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.838 19:03:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.000 19:03:23 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:50.000 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:50.000 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:50.000 19:03:23 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.000 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:50.001 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:50.001 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:50.001 
19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:50.001 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:50.001 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:50.001 altname enp217s0f0np0 00:08:50.001 altname ens818f0np0 00:08:50.001 inet 192.168.100.8/24 scope global mlx_0_0 00:08:50.001 valid_lft forever preferred_lft forever 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:50.001 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:50.001 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:50.001 altname enp217s0f1np1 00:08:50.001 altname ens818f1np1 00:08:50.001 inet 192.168.100.9/24 scope global mlx_0_1 00:08:50.001 valid_lft forever preferred_lft forever 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:50.001 192.168.100.9' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:50.001 192.168.100.9' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:50.001 192.168.100.9' 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:50.001 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:50.002 
19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=156628 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 156628 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 156628 ']' 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.002 [2024-12-13 19:03:23.414284] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:50.002 [2024-12-13 19:03:23.414343] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.002 [2024-12-13 19:03:23.505471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.002 [2024-12-13 19:03:23.527277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.002 [2024-12-13 19:03:23.527315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.002 [2024-12-13 19:03:23.527324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.002 [2024-12-13 19:03:23.527333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.002 [2024-12-13 19:03:23.527340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
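The address-discovery steps traced above reduce to two small shell patterns: a per-interface pipeline that isolates an IPv4 address, and a head/tail pair that splits the resulting list into first and second target IPs. A minimal sketch of both, assuming interfaces named mlx_0_0/mlx_0_1 as on this rig (the function name is illustrative, not the script's):

get_ipv4() {
    # ip -o prints one record per line; field 4 is "ADDR/PREFIX"; cut drops "/PREFIX".
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ipv4 mlx_0_0)
$(get_ipv4 mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 in this run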
00:08:50.002 [2024-12-13 19:03:23.528795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.002 [2024-12-13 19:03:23.528903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.002 [2024-12-13 19:03:23.528904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:50.002 [2024-12-13 19:03:23.859386] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x94f940/0x953df0) succeed. 00:08:50.002 [2024-12-13 19:03:23.868262] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x950ee0/0x995490) succeed. 00:08:50.002 19:03:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.002 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:50.002 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.262 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:50.262 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:50.262 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:50.522 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b412f0e2-f55c-461a-9698-9e1780f728d5 00:08:50.522 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b412f0e2-f55c-461a-9698-9e1780f728d5 lvol 20 00:08:50.781 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7e0e2883-6ea3-4857-8a0d-67a4c6d33425 00:08:50.781 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:51.040 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e0e2883-6ea3-4857-8a0d-67a4c6d33425 00:08:51.299 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:51.299 [2024-12-13 19:03:25.604692] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:51.299 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:51.558 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=157059 00:08:51.558 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:51.559 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:52.497 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7e0e2883-6ea3-4857-8a0d-67a4c6d33425 MY_SNAPSHOT 00:08:52.757 19:03:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ec7c3a0c-d44a-4571-9244-0bbbfba236d0 00:08:52.757 19:03:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7e0e2883-6ea3-4857-8a0d-67a4c6d33425 30 00:08:53.017 19:03:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ec7c3a0c-d44a-4571-9244-0bbbfba236d0 MY_CLONE 00:08:53.277 19:03:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3e5f5401-ddbf-4d42-8345-45ef51b9a91f 00:08:53.277 19:03:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3e5f5401-ddbf-4d42-8345-45ef51b9a91f 00:08:53.537 19:03:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 157059 00:09:03.526 Initializing NVMe Controllers 00:09:03.526 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:03.526 Controller IO queue size 128, less than required. 00:09:03.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:03.526 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:03.526 Initialization complete. Launching workers. 
00:09:03.526 ======================================================== 00:09:03.526 Latency(us) 00:09:03.526 Device Information : IOPS MiB/s Average min max 00:09:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16292.30 63.64 7857.59 2114.51 41243.94 00:09:03.526 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16246.80 63.46 7879.90 3469.62 36713.61 00:09:03.526 ======================================================== 00:09:03.526 Total : 32539.10 127.11 7868.73 2114.51 41243.94 00:09:03.526 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e0e2883-6ea3-4857-8a0d-67a4c6d33425 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b412f0e2-f55c-461a-9698-9e1780f728d5 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:03.526 rmmod nvme_rdma 00:09:03.526 rmmod nvme_fabrics 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 156628 ']' 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 156628 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 156628 ']' 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 156628 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.526 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156628 00:09:03.786 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.786 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.786 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156628' 00:09:03.786 killing process with pid 156628 00:09:03.786 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 156628 00:09:03.786 19:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 156628 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:04.047 00:09:04.047 real 0m22.404s 00:09:04.047 user 1m10.836s 00:09:04.047 sys 0m6.783s 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.047 ************************************ 00:09:04.047 END TEST nvmf_lvol 00:09:04.047 ************************************ 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.047 ************************************ 00:09:04.047 START TEST nvmf_lvs_grow 00:09:04.047 ************************************ 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:04.047 * Looking for test storage... 
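Stripped of the xtrace prefixes, the nvmf_lvol test that just finished is a linear sequence of rpc.py calls. A condensed sketch of that sequence as traced above, assuming rpc aliases scripts/rpc.py from an SPDK checkout; the variable captures are illustrative, standing in for the UUIDs rpc.py prints:

rpc="scripts/rpc.py"
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB volume on the store
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
# while spdk_nvme_perf writes over RDMA, the snapshot path is exercised:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                    # grow the live volume to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                     # decouple the clone from its snapshot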
00:09:04.047 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:04.047 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:04.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.307 --rc genhtml_branch_coverage=1 00:09:04.307 --rc genhtml_function_coverage=1 00:09:04.307 --rc genhtml_legend=1 00:09:04.307 --rc geninfo_all_blocks=1 00:09:04.307 --rc geninfo_unexecuted_blocks=1 00:09:04.307 00:09:04.307 ' 00:09:04.307 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:04.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.308 --rc genhtml_branch_coverage=1 00:09:04.308 --rc genhtml_function_coverage=1 00:09:04.308 --rc genhtml_legend=1 00:09:04.308 --rc geninfo_all_blocks=1 00:09:04.308 --rc geninfo_unexecuted_blocks=1 00:09:04.308 00:09:04.308 ' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:04.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.308 --rc genhtml_branch_coverage=1 00:09:04.308 --rc genhtml_function_coverage=1 00:09:04.308 --rc genhtml_legend=1 00:09:04.308 --rc geninfo_all_blocks=1 00:09:04.308 --rc geninfo_unexecuted_blocks=1 00:09:04.308 00:09:04.308 ' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:04.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.308 --rc genhtml_branch_coverage=1 00:09:04.308 --rc genhtml_function_coverage=1 00:09:04.308 --rc genhtml_legend=1 00:09:04.308 --rc geninfo_all_blocks=1 00:09:04.308 --rc geninfo_unexecuted_blocks=1 00:09:04.308 00:09:04.308 ' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
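The cmp_versions walk traced above compares dotted versions field by field: both strings are split on ., - and :, missing fields default low, and the first unequal pair of components decides. A compact sketch of the same idea, simplified to purely numeric fields rather than the script verbatim:

version_lt() {
    # Returns 0 (true) when $1 is strictly older than $2, comparing components numerically.
    local IFS=.-:
    read -ra a <<< "$1"; read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2"   # true: 1 < 2 on the first field, as traced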
00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.308 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:04.308 19:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.448 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.449 19:03:45 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:12.449 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:12.449 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:12.449 19:03:45 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:12.449 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:12.449 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:12.449 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:12.449 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:12.449 altname enp217s0f0np0 00:09:12.449 altname ens818f0np0 00:09:12.449 inet 192.168.100.8/24 scope global mlx_0_0 00:09:12.449 valid_lft forever preferred_lft forever 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:12.449 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:12.449 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:12.449 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:12.449 altname enp217s0f1np1 00:09:12.449 altname ens818f1np1 00:09:12.449 inet 192.168.100.9/24 scope global mlx_0_1 00:09:12.449 valid_lft forever preferred_lft forever 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:12.450 19:03:45 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:12.450 192.168.100.9' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:12.450 192.168.100.9' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:12.450 192.168.100.9' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=162632 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 162632 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 162632 ']' 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.450 19:03:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.450 [2024-12-13 19:03:45.916293] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:12.450 [2024-12-13 19:03:45.916350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.450 [2024-12-13 19:03:46.008728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.450 [2024-12-13 19:03:46.029847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.450 [2024-12-13 19:03:46.029883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.450 [2024-12-13 19:03:46.029892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.450 [2024-12-13 19:03:46.029901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.450 [2024-12-13 19:03:46.029908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
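Before any of the RDMA RPCs can succeed, the harness loads the kernel stack and pins the transport options, as this block traces. The same preparation as a standalone sketch, run as root; the module list is copied from the rdma_device_init trace above:

for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"                        # IB/iWARP core plus the CM/userspace pieces
done
modprobe nvme-rdma                       # host-side fabrics driver, loaded once IPs resolve
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'   # later fed to nvmf_create_transport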
00:09:12.450 [2024-12-13 19:03:46.030459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:12.450 [2024-12-13 19:03:46.350676] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeee380/0xef2830) succeed. 00:09:12.450 [2024-12-13 19:03:46.359190] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeef7e0/0xf33ed0) succeed. 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.450 ************************************ 00:09:12.450 START TEST lvs_grow_clean 00:09:12.450 ************************************ 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:12.450 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:12.710 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:12.710 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:12.710 19:03:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:12.710 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:12.710 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:12.710 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6779d1e-0c30-4b52-ada3-a9dece71420c lvol 150 00:09:12.970 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f02cff78-42a3-4c28-b7f3-da2ac22d6159 00:09:12.970 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.970 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:13.229 [2024-12-13 19:03:47.425433] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:13.229 [2024-12-13 19:03:47.425476] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:13.229 true 00:09:13.229 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:13.229 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:13.489 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:13.489 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:13.489 19:03:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f02cff78-42a3-4c28-b7f3-da2ac22d6159 00:09:13.748 19:03:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:14.007 [2024-12-13 19:03:48.167796] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=163176 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 163176 /var/tmp/bdevperf.sock 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 163176 ']' 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.007 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.008 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.008 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:14.267 [2024-12-13 19:03:48.407310] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
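With the listener up, the clean-path export and initiator attach recorded around this point reduce to the sketch below. Paths are shortened to rpc.py/bdevperf for readability; every flag, UUID, and address is taken verbatim from this trace:

  # export the 150M lvol through an NVMe-oF subsystem on the RDMA listener
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f02cff78-42a3-4c28-b7f3-da2ac22d6159
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  # run bdevperf on core 1: 4 KiB random writes, queue depth 128, 10 s, per-second stats
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

As used here, -z holds bdevperf idle until the perform_tests RPC is issued later by bdevperf.py, which is why the I/O table only begins after the controller attach succeeds.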
00:09:14.267 [2024-12-13 19:03:48.407365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163176 ] 00:09:14.267 [2024-12-13 19:03:48.497746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.267 [2024-12-13 19:03:48.520095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.267 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.267 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:14.267 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:14.526 Nvme0n1 00:09:14.526 19:03:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:14.786 [ 00:09:14.786 { 00:09:14.786 "name": "Nvme0n1", 00:09:14.786 "aliases": [ 00:09:14.786 "f02cff78-42a3-4c28-b7f3-da2ac22d6159" 00:09:14.786 ], 00:09:14.786 "product_name": "NVMe disk", 00:09:14.786 "block_size": 4096, 00:09:14.786 "num_blocks": 38912, 00:09:14.786 "uuid": "f02cff78-42a3-4c28-b7f3-da2ac22d6159", 00:09:14.786 "numa_id": 1, 00:09:14.786 "assigned_rate_limits": { 00:09:14.786 "rw_ios_per_sec": 0, 00:09:14.786 "rw_mbytes_per_sec": 0, 00:09:14.786 "r_mbytes_per_sec": 0, 00:09:14.786 "w_mbytes_per_sec": 0 00:09:14.786 }, 00:09:14.786 "claimed": false, 00:09:14.786 "zoned": false, 00:09:14.786 "supported_io_types": { 00:09:14.786 "read": true, 00:09:14.786 "write": true, 00:09:14.786 "unmap": true, 00:09:14.786 "flush": true, 00:09:14.786 "reset": true, 00:09:14.786 "nvme_admin": true, 00:09:14.786 "nvme_io": true, 00:09:14.786 "nvme_io_md": false, 00:09:14.786 "write_zeroes": true, 00:09:14.786 "zcopy": false, 00:09:14.786 "get_zone_info": false, 00:09:14.786 "zone_management": false, 00:09:14.786 "zone_append": false, 00:09:14.786 "compare": true, 00:09:14.786 "compare_and_write": true, 00:09:14.786 "abort": true, 00:09:14.786 "seek_hole": false, 00:09:14.786 "seek_data": false, 00:09:14.786 "copy": true, 00:09:14.786 "nvme_iov_md": false 00:09:14.786 }, 00:09:14.786 "memory_domains": [ 00:09:14.786 { 00:09:14.786 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:14.786 "dma_device_type": 0 00:09:14.786 } 00:09:14.786 ], 00:09:14.786 "driver_specific": { 00:09:14.786 "nvme": [ 00:09:14.786 { 00:09:14.786 "trid": { 00:09:14.786 "trtype": "RDMA", 00:09:14.786 "adrfam": "IPv4", 00:09:14.786 "traddr": "192.168.100.8", 00:09:14.786 "trsvcid": "4420", 00:09:14.786 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:14.786 }, 00:09:14.786 "ctrlr_data": { 00:09:14.786 "cntlid": 1, 00:09:14.786 "vendor_id": "0x8086", 00:09:14.786 "model_number": "SPDK bdev Controller", 00:09:14.786 "serial_number": "SPDK0", 00:09:14.786 "firmware_revision": "25.01", 00:09:14.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:14.786 "oacs": { 00:09:14.786 "security": 0, 00:09:14.786 "format": 0, 00:09:14.786 "firmware": 0, 00:09:14.786 "ns_manage": 0 00:09:14.786 }, 00:09:14.786 "multi_ctrlr": true, 
00:09:14.786 "ana_reporting": false 00:09:14.786 }, 00:09:14.786 "vs": { 00:09:14.786 "nvme_version": "1.3" 00:09:14.786 }, 00:09:14.786 "ns_data": { 00:09:14.786 "id": 1, 00:09:14.786 "can_share": true 00:09:14.786 } 00:09:14.786 } 00:09:14.786 ], 00:09:14.786 "mp_policy": "active_passive" 00:09:14.786 } 00:09:14.786 } 00:09:14.786 ] 00:09:14.786 19:03:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=163219 00:09:14.786 19:03:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:14.786 19:03:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.046 Running I/O for 10 seconds... 00:09:15.983 Latency(us) 00:09:15.983 [2024-12-13T18:03:50.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.983 Nvme0n1 : 1.00 34402.00 134.38 0.00 0.00 0.00 0.00 0.00 00:09:15.983 [2024-12-13T18:03:50.361Z] =================================================================================================================== 00:09:15.983 [2024-12-13T18:03:50.361Z] Total : 34402.00 134.38 0.00 0.00 0.00 0.00 0.00 00:09:15.983 00:09:16.921 19:03:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:16.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.921 Nvme0n1 : 2.00 34543.50 134.94 0.00 0.00 0.00 0.00 0.00 00:09:16.922 [2024-12-13T18:03:51.300Z] =================================================================================================================== 00:09:16.922 [2024-12-13T18:03:51.300Z] Total : 34543.50 134.94 0.00 0.00 0.00 0.00 0.00 00:09:16.922 00:09:16.922 true 00:09:16.922 19:03:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:16.922 19:03:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:17.181 19:03:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:17.181 19:03:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:17.181 19:03:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 163219 00:09:18.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.119 Nvme0n1 : 3.00 34751.00 135.75 0.00 0.00 0.00 0.00 0.00 00:09:18.119 [2024-12-13T18:03:52.497Z] =================================================================================================================== 00:09:18.119 [2024-12-13T18:03:52.497Z] Total : 34751.00 135.75 0.00 0.00 0.00 0.00 0.00 00:09:18.119 00:09:19.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.057 Nvme0n1 : 4.00 34918.75 136.40 0.00 0.00 0.00 0.00 0.00 00:09:19.057 [2024-12-13T18:03:53.435Z] 
=================================================================================================================== 00:09:19.057 [2024-12-13T18:03:53.435Z] Total : 34918.75 136.40 0.00 0.00 0.00 0.00 0.00 00:09:19.057 00:09:19.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.994 Nvme0n1 : 5.00 35040.80 136.88 0.00 0.00 0.00 0.00 0.00 00:09:19.994 [2024-12-13T18:03:54.372Z] =================================================================================================================== 00:09:19.994 [2024-12-13T18:03:54.372Z] Total : 35040.80 136.88 0.00 0.00 0.00 0.00 0.00 00:09:19.994 00:09:20.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.931 Nvme0n1 : 6.00 35126.00 137.21 0.00 0.00 0.00 0.00 0.00 00:09:20.931 [2024-12-13T18:03:55.309Z] =================================================================================================================== 00:09:20.931 [2024-12-13T18:03:55.309Z] Total : 35126.00 137.21 0.00 0.00 0.00 0.00 0.00 00:09:20.931 00:09:21.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.868 Nvme0n1 : 7.00 35185.57 137.44 0.00 0.00 0.00 0.00 0.00 00:09:21.868 [2024-12-13T18:03:56.246Z] =================================================================================================================== 00:09:21.868 [2024-12-13T18:03:56.246Z] Total : 35185.57 137.44 0.00 0.00 0.00 0.00 0.00 00:09:21.868 00:09:23.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.247 Nvme0n1 : 8.00 35231.50 137.62 0.00 0.00 0.00 0.00 0.00 00:09:23.247 [2024-12-13T18:03:57.625Z] =================================================================================================================== 00:09:23.247 [2024-12-13T18:03:57.625Z] Total : 35231.50 137.62 0.00 0.00 0.00 0.00 0.00 00:09:23.247 00:09:24.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.185 Nvme0n1 : 9.00 35268.00 137.77 0.00 0.00 0.00 0.00 0.00 00:09:24.185 [2024-12-13T18:03:58.563Z] =================================================================================================================== 00:09:24.185 [2024-12-13T18:03:58.563Z] Total : 35268.00 137.77 0.00 0.00 0.00 0.00 0.00 00:09:24.185 00:09:25.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.123 Nvme0n1 : 10.00 35292.90 137.86 0.00 0.00 0.00 0.00 0.00 00:09:25.123 [2024-12-13T18:03:59.501Z] =================================================================================================================== 00:09:25.123 [2024-12-13T18:03:59.501Z] Total : 35292.90 137.86 0.00 0.00 0.00 0.00 0.00 00:09:25.123 00:09:25.123 00:09:25.123 Latency(us) 00:09:25.123 [2024-12-13T18:03:59.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.123 Nvme0n1 : 10.00 35293.86 137.87 0.00 0.00 3623.71 2477.26 11796.48 00:09:25.123 [2024-12-13T18:03:59.501Z] =================================================================================================================== 00:09:25.123 [2024-12-13T18:03:59.501Z] Total : 35293.86 137.87 0.00 0.00 3623.71 2477.26 11796.48 00:09:25.123 { 00:09:25.123 "results": [ 00:09:25.123 { 00:09:25.123 "job": "Nvme0n1", 00:09:25.123 "core_mask": "0x2", 00:09:25.123 "workload": "randwrite", 00:09:25.123 "status": "finished", 00:09:25.123 "queue_depth": 128, 00:09:25.123 "io_size": 4096, 
00:09:25.123 "runtime": 10.003185, 00:09:25.123 "iops": 35293.85890593846, 00:09:25.123 "mibps": 137.86663635132211, 00:09:25.123 "io_failed": 0, 00:09:25.123 "io_timeout": 0, 00:09:25.123 "avg_latency_us": 3623.7117961178415, 00:09:25.123 "min_latency_us": 2477.2608, 00:09:25.123 "max_latency_us": 11796.48 00:09:25.123 } 00:09:25.123 ], 00:09:25.123 "core_count": 1 00:09:25.123 } 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 163176 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 163176 ']' 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 163176 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 163176 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 163176' 00:09:25.123 killing process with pid 163176 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 163176 00:09:25.123 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.123 00:09:25.123 Latency(us) 00:09:25.123 [2024-12-13T18:03:59.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.123 [2024-12-13T18:03:59.501Z] =================================================================================================================== 00:09:25.123 [2024-12-13T18:03:59.501Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 163176 00:09:25.123 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:25.383 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:25.642 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:25.642 19:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:25.902 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:25.902 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:25.902 19:04:00 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.902 [2024-12-13 19:04:00.230558] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:26.161 request: 00:09:26.161 { 00:09:26.161 "uuid": "b6779d1e-0c30-4b52-ada3-a9dece71420c", 00:09:26.161 "method": "bdev_lvol_get_lvstores", 00:09:26.161 "req_id": 1 00:09:26.161 } 00:09:26.161 Got JSON-RPC error response 00:09:26.161 response: 00:09:26.161 { 00:09:26.161 "code": -19, 00:09:26.161 "message": "No such device" 00:09:26.161 } 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.161 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.420 aio_bdev 00:09:26.420 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f02cff78-42a3-4c28-b7f3-da2ac22d6159 00:09:26.420 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f02cff78-42a3-4c28-b7f3-da2ac22d6159 00:09:26.420 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.420 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:26.420 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.421 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.421 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:26.680 19:04:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f02cff78-42a3-4c28-b7f3-da2ac22d6159 -t 2000 00:09:26.680 [ 00:09:26.680 { 00:09:26.680 "name": "f02cff78-42a3-4c28-b7f3-da2ac22d6159", 00:09:26.680 "aliases": [ 00:09:26.680 "lvs/lvol" 00:09:26.680 ], 00:09:26.680 "product_name": "Logical Volume", 00:09:26.680 "block_size": 4096, 00:09:26.680 "num_blocks": 38912, 00:09:26.680 "uuid": "f02cff78-42a3-4c28-b7f3-da2ac22d6159", 00:09:26.680 "assigned_rate_limits": { 00:09:26.680 "rw_ios_per_sec": 0, 00:09:26.680 "rw_mbytes_per_sec": 0, 00:09:26.680 "r_mbytes_per_sec": 0, 00:09:26.680 "w_mbytes_per_sec": 0 00:09:26.680 }, 00:09:26.680 "claimed": false, 00:09:26.680 "zoned": false, 00:09:26.680 "supported_io_types": { 00:09:26.680 "read": true, 00:09:26.680 "write": true, 00:09:26.680 "unmap": true, 00:09:26.680 "flush": false, 00:09:26.680 "reset": true, 00:09:26.680 "nvme_admin": false, 00:09:26.680 "nvme_io": false, 00:09:26.680 "nvme_io_md": false, 00:09:26.680 "write_zeroes": true, 00:09:26.680 "zcopy": false, 00:09:26.680 "get_zone_info": false, 00:09:26.680 "zone_management": false, 00:09:26.680 "zone_append": false, 00:09:26.680 "compare": false, 00:09:26.680 "compare_and_write": false, 00:09:26.680 "abort": false, 00:09:26.680 "seek_hole": true, 00:09:26.680 "seek_data": true, 00:09:26.680 "copy": false, 00:09:26.680 "nvme_iov_md": false 00:09:26.680 }, 00:09:26.680 "driver_specific": { 00:09:26.680 "lvol": { 00:09:26.680 "lvol_store_uuid": "b6779d1e-0c30-4b52-ada3-a9dece71420c", 00:09:26.680 "base_bdev": "aio_bdev", 00:09:26.680 "thin_provision": false, 00:09:26.680 "num_allocated_clusters": 38, 00:09:26.680 "snapshot": false, 00:09:26.680 "clone": false, 00:09:26.680 "esnap_clone": false 00:09:26.680 } 00:09:26.680 } 00:09:26.680 } 00:09:26.680 ] 00:09:26.680 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:26.680 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:26.680 19:04:01 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:26.940 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:26.940 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:26.940 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:27.199 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:27.199 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f02cff78-42a3-4c28-b7f3-da2ac22d6159 00:09:27.458 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6779d1e-0c30-4b52-ada3-a9dece71420c 00:09:27.458 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:27.716 19:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.716 00:09:27.716 real 0m15.555s 00:09:27.716 user 0m15.342s 00:09:27.716 sys 0m1.230s 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:27.716 ************************************ 00:09:27.716 END TEST lvs_grow_clean 00:09:27.716 ************************************ 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.716 ************************************ 00:09:27.716 START TEST lvs_grow_dirty 00:09:27.716 ************************************ 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:27.716 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:27.975 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:27.975 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:09:27.975 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:27.975 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.975 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.975 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.975 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:27.975 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:28.235 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:28.235 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:28.235 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:28.496 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:28.496 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:28.496 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7bc5b410-c609-4026-b124-c02c9abe61ec lvol 150 00:09:28.755 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=746adbfe-f590-4d75-966d-f4e6a41ea07e 00:09:28.755 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.755 19:04:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:28.755 [2024-12-13 19:04:03.085038] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:28.755 [2024-12-13 19:04:03.085085] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:28.755 true 00:09:28.755 19:04:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:28.755 19:04:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:29.015 19:04:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:29.015 19:04:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:29.275 19:04:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 746adbfe-f590-4d75-966d-f4e6a41ea07e 00:09:29.534 19:04:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:29.534 [2024-12-13 19:04:03.823400] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:29.534 19:04:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=165939 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 165939 /var/tmp/bdevperf.sock 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 165939 ']' 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:29.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.794 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.794 [2024-12-13 19:04:04.050321] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
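The dirty variant reuses the same grow sequence as the clean path: the backing file was already doubled and rescanned just above, and the bdev_lvol_grow_lvstore call lands mid-workload below. Condensed, with the lvstore UUID from this run, the grow amounts to:

  # double the AIO backing file, rescan so the bdev sees 102400 blocks instead of 51200,
  # then grow the lvstore into the new space while I/O keeps running
  truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev
  rpc.py bdev_lvol_grow_lvstore -u 7bc5b410-c609-4026-b124-c02c9abe61ec
  # with 4 MiB clusters, total_data_clusters should move from 49 to 99
  rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec \
      | jq -r '.[0].total_data_clusters'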
00:09:29.794 [2024-12-13 19:04:04.050369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165939 ] 00:09:29.794 [2024-12-13 19:04:04.144200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.794 [2024-12-13 19:04:04.166769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.053 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.053 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:30.053 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:30.312 Nvme0n1 00:09:30.312 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:30.572 [ 00:09:30.572 { 00:09:30.572 "name": "Nvme0n1", 00:09:30.572 "aliases": [ 00:09:30.572 "746adbfe-f590-4d75-966d-f4e6a41ea07e" 00:09:30.572 ], 00:09:30.572 "product_name": "NVMe disk", 00:09:30.572 "block_size": 4096, 00:09:30.572 "num_blocks": 38912, 00:09:30.572 "uuid": "746adbfe-f590-4d75-966d-f4e6a41ea07e", 00:09:30.572 "numa_id": 1, 00:09:30.572 "assigned_rate_limits": { 00:09:30.572 "rw_ios_per_sec": 0, 00:09:30.572 "rw_mbytes_per_sec": 0, 00:09:30.572 "r_mbytes_per_sec": 0, 00:09:30.572 "w_mbytes_per_sec": 0 00:09:30.572 }, 00:09:30.572 "claimed": false, 00:09:30.572 "zoned": false, 00:09:30.572 "supported_io_types": { 00:09:30.572 "read": true, 00:09:30.572 "write": true, 00:09:30.572 "unmap": true, 00:09:30.572 "flush": true, 00:09:30.572 "reset": true, 00:09:30.572 "nvme_admin": true, 00:09:30.572 "nvme_io": true, 00:09:30.572 "nvme_io_md": false, 00:09:30.572 "write_zeroes": true, 00:09:30.572 "zcopy": false, 00:09:30.572 "get_zone_info": false, 00:09:30.572 "zone_management": false, 00:09:30.572 "zone_append": false, 00:09:30.572 "compare": true, 00:09:30.572 "compare_and_write": true, 00:09:30.572 "abort": true, 00:09:30.572 "seek_hole": false, 00:09:30.572 "seek_data": false, 00:09:30.572 "copy": true, 00:09:30.572 "nvme_iov_md": false 00:09:30.572 }, 00:09:30.572 "memory_domains": [ 00:09:30.572 { 00:09:30.572 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:30.572 "dma_device_type": 0 00:09:30.572 } 00:09:30.572 ], 00:09:30.572 "driver_specific": { 00:09:30.572 "nvme": [ 00:09:30.572 { 00:09:30.572 "trid": { 00:09:30.572 "trtype": "RDMA", 00:09:30.572 "adrfam": "IPv4", 00:09:30.572 "traddr": "192.168.100.8", 00:09:30.572 "trsvcid": "4420", 00:09:30.572 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:30.572 }, 00:09:30.572 "ctrlr_data": { 00:09:30.572 "cntlid": 1, 00:09:30.572 "vendor_id": "0x8086", 00:09:30.572 "model_number": "SPDK bdev Controller", 00:09:30.572 "serial_number": "SPDK0", 00:09:30.572 "firmware_revision": "25.01", 00:09:30.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:30.572 "oacs": { 00:09:30.572 "security": 0, 00:09:30.572 "format": 0, 00:09:30.572 "firmware": 0, 00:09:30.572 "ns_manage": 0 00:09:30.572 }, 00:09:30.572 "multi_ctrlr": true, 
00:09:30.572 "ana_reporting": false 00:09:30.572 }, 00:09:30.572 "vs": { 00:09:30.572 "nvme_version": "1.3" 00:09:30.572 }, 00:09:30.572 "ns_data": { 00:09:30.572 "id": 1, 00:09:30.572 "can_share": true 00:09:30.572 } 00:09:30.572 } 00:09:30.572 ], 00:09:30.572 "mp_policy": "active_passive" 00:09:30.572 } 00:09:30.572 } 00:09:30.572 ] 00:09:30.572 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=165951 00:09:30.572 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:30.572 19:04:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:30.572 Running I/O for 10 seconds... 00:09:31.510 Latency(us) 00:09:31.510 [2024-12-13T18:04:05.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.510 Nvme0n1 : 1.00 34402.00 134.38 0.00 0.00 0.00 0.00 0.00 00:09:31.510 [2024-12-13T18:04:05.888Z] =================================================================================================================== 00:09:31.510 [2024-12-13T18:04:05.888Z] Total : 34402.00 134.38 0.00 0.00 0.00 0.00 0.00 00:09:31.510 00:09:32.448 19:04:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:32.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.448 Nvme0n1 : 2.00 34703.50 135.56 0.00 0.00 0.00 0.00 0.00 00:09:32.448 [2024-12-13T18:04:06.826Z] =================================================================================================================== 00:09:32.448 [2024-12-13T18:04:06.826Z] Total : 34703.50 135.56 0.00 0.00 0.00 0.00 0.00 00:09:32.448 00:09:32.708 true 00:09:32.708 19:04:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:32.708 19:04:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:32.967 19:04:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.967 19:04:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.967 19:04:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 165951 00:09:33.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.536 Nvme0n1 : 3.00 34901.00 136.33 0.00 0.00 0.00 0.00 0.00 00:09:33.536 [2024-12-13T18:04:07.914Z] =================================================================================================================== 00:09:33.536 [2024-12-13T18:04:07.914Z] Total : 34901.00 136.33 0.00 0.00 0.00 0.00 0.00 00:09:33.536 00:09:34.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.474 Nvme0n1 : 4.00 35040.00 136.88 0.00 0.00 0.00 0.00 0.00 00:09:34.474 [2024-12-13T18:04:08.852Z] 
=================================================================================================================== 00:09:34.474 [2024-12-13T18:04:08.852Z] Total : 35040.00 136.88 0.00 0.00 0.00 0.00 0.00 00:09:34.474 00:09:35.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.853 Nvme0n1 : 5.00 35130.00 137.23 0.00 0.00 0.00 0.00 0.00 00:09:35.853 [2024-12-13T18:04:10.231Z] =================================================================================================================== 00:09:35.853 [2024-12-13T18:04:10.231Z] Total : 35130.00 137.23 0.00 0.00 0.00 0.00 0.00 00:09:35.853 00:09:36.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.791 Nvme0n1 : 6.00 35108.83 137.14 0.00 0.00 0.00 0.00 0.00 00:09:36.791 [2024-12-13T18:04:11.169Z] =================================================================================================================== 00:09:36.791 [2024-12-13T18:04:11.169Z] Total : 35108.83 137.14 0.00 0.00 0.00 0.00 0.00 00:09:36.791 00:09:37.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.728 Nvme0n1 : 7.00 35149.57 137.30 0.00 0.00 0.00 0.00 0.00 00:09:37.728 [2024-12-13T18:04:12.106Z] =================================================================================================================== 00:09:37.728 [2024-12-13T18:04:12.106Z] Total : 35149.57 137.30 0.00 0.00 0.00 0.00 0.00 00:09:37.728 00:09:38.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.665 Nvme0n1 : 8.00 35208.38 137.53 0.00 0.00 0.00 0.00 0.00 00:09:38.665 [2024-12-13T18:04:13.043Z] =================================================================================================================== 00:09:38.665 [2024-12-13T18:04:13.043Z] Total : 35208.38 137.53 0.00 0.00 0.00 0.00 0.00 00:09:38.665 00:09:39.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.602 Nvme0n1 : 9.00 35256.67 137.72 0.00 0.00 0.00 0.00 0.00 00:09:39.602 [2024-12-13T18:04:13.980Z] =================================================================================================================== 00:09:39.602 [2024-12-13T18:04:13.980Z] Total : 35256.67 137.72 0.00 0.00 0.00 0.00 0.00 00:09:39.602 00:09:40.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.540 Nvme0n1 : 10.00 35286.70 137.84 0.00 0.00 0.00 0.00 0.00 00:09:40.540 [2024-12-13T18:04:14.918Z] =================================================================================================================== 00:09:40.540 [2024-12-13T18:04:14.918Z] Total : 35286.70 137.84 0.00 0.00 0.00 0.00 0.00 00:09:40.540 00:09:40.540 00:09:40.540 Latency(us) 00:09:40.540 [2024-12-13T18:04:14.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.540 Nvme0n1 : 10.00 35285.73 137.83 0.00 0.00 3624.62 2713.19 15728.64 00:09:40.540 [2024-12-13T18:04:14.918Z] =================================================================================================================== 00:09:40.540 [2024-12-13T18:04:14.918Z] Total : 35285.73 137.83 0.00 0.00 3624.62 2713.19 15728.64 00:09:40.540 { 00:09:40.540 "results": [ 00:09:40.540 { 00:09:40.540 "job": "Nvme0n1", 00:09:40.540 "core_mask": "0x2", 00:09:40.540 "workload": "randwrite", 00:09:40.540 "status": "finished", 00:09:40.540 "queue_depth": 128, 00:09:40.540 "io_size": 4096, 
00:09:40.540 "runtime": 10.002966, 00:09:40.540 "iops": 35285.73425122109, 00:09:40.540 "mibps": 137.83489941883238, 00:09:40.540 "io_failed": 0, 00:09:40.540 "io_timeout": 0, 00:09:40.540 "avg_latency_us": 3624.623590519093, 00:09:40.540 "min_latency_us": 2713.1904, 00:09:40.540 "max_latency_us": 15728.64 00:09:40.540 } 00:09:40.540 ], 00:09:40.540 "core_count": 1 00:09:40.540 } 00:09:40.540 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 165939 00:09:40.540 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 165939 ']' 00:09:40.540 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 165939 00:09:40.540 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:40.540 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.540 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165939 00:09:40.800 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:40.800 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:40.800 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165939' 00:09:40.800 killing process with pid 165939 00:09:40.800 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 165939 00:09:40.800 Received shutdown signal, test time was about 10.000000 seconds 00:09:40.800 00:09:40.800 Latency(us) 00:09:40.800 [2024-12-13T18:04:15.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.800 [2024-12-13T18:04:15.178Z] =================================================================================================================== 00:09:40.800 [2024-12-13T18:04:15.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:40.800 19:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 165939 00:09:40.800 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:41.058 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:41.317 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:41.317 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:41.576 19:04:15 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 162632 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 162632 00:09:41.576 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 162632 Killed "${NVMF_APP[@]}" "$@" 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.576 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=167874 00:09:41.577 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 167874 00:09:41.577 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:41.577 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 167874 ']' 00:09:41.577 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.577 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.577 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.577 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.577 19:04:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.577 [2024-12-13 19:04:15.807526] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:41.577 [2024-12-13 19:04:15.807578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.577 [2024-12-13 19:04:15.901408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.577 [2024-12-13 19:04:15.921891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.577 [2024-12-13 19:04:15.921923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.577 [2024-12-13 19:04:15.921935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.577 [2024-12-13 19:04:15.921943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
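This is the step that makes the run "dirty": the first target (pid 162632) is killed with SIGKILL, so the grown lvstore is never cleanly unloaded, and the replacement target (pid 167874) has to recover the blobstore — the "Performing recovery on blobstore" notice just below. Condensed from the commands in this trace, the crash/recover check is:

  # hard-kill the old target, then re-open the same backing file under a fresh one;
  # bdev_aio_create triggers blobstore examine, which performs the recovery
  kill -9 162632
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  rpc.py bdev_aio_create \
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # the recovered lvstore should keep the grown geometry: 99 total clusters, 61 free
  rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec \
      | jq -r '.[0].free_clusters'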
00:09:41.577 [2024-12-13 19:04:15.921967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.577 [2024-12-13 19:04:15.922553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.836 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.836 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:41.836 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.836 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.836 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.836 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.836 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:42.096 [2024-12-13 19:04:16.235849] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:42.096 [2024-12-13 19:04:16.235933] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:42.096 [2024-12-13 19:04:16.235961] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 746adbfe-f590-4d75-966d-f4e6a41ea07e 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=746adbfe-f590-4d75-966d-f4e6a41ea07e 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:42.096 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 746adbfe-f590-4d75-966d-f4e6a41ea07e -t 2000 00:09:42.356 [ 00:09:42.356 { 00:09:42.356 "name": "746adbfe-f590-4d75-966d-f4e6a41ea07e", 00:09:42.356 "aliases": [ 00:09:42.356 "lvs/lvol" 00:09:42.356 ], 00:09:42.356 "product_name": "Logical Volume", 00:09:42.356 "block_size": 4096, 00:09:42.356 "num_blocks": 38912, 00:09:42.356 "uuid": "746adbfe-f590-4d75-966d-f4e6a41ea07e", 00:09:42.356 "assigned_rate_limits": { 00:09:42.356 "rw_ios_per_sec": 0, 00:09:42.356 "rw_mbytes_per_sec": 0, 
00:09:42.356 "r_mbytes_per_sec": 0, 00:09:42.356 "w_mbytes_per_sec": 0 00:09:42.356 }, 00:09:42.356 "claimed": false, 00:09:42.356 "zoned": false, 00:09:42.356 "supported_io_types": { 00:09:42.356 "read": true, 00:09:42.356 "write": true, 00:09:42.356 "unmap": true, 00:09:42.356 "flush": false, 00:09:42.356 "reset": true, 00:09:42.356 "nvme_admin": false, 00:09:42.356 "nvme_io": false, 00:09:42.356 "nvme_io_md": false, 00:09:42.356 "write_zeroes": true, 00:09:42.356 "zcopy": false, 00:09:42.356 "get_zone_info": false, 00:09:42.356 "zone_management": false, 00:09:42.356 "zone_append": false, 00:09:42.356 "compare": false, 00:09:42.356 "compare_and_write": false, 00:09:42.356 "abort": false, 00:09:42.356 "seek_hole": true, 00:09:42.356 "seek_data": true, 00:09:42.356 "copy": false, 00:09:42.356 "nvme_iov_md": false 00:09:42.356 }, 00:09:42.356 "driver_specific": { 00:09:42.356 "lvol": { 00:09:42.356 "lvol_store_uuid": "7bc5b410-c609-4026-b124-c02c9abe61ec", 00:09:42.356 "base_bdev": "aio_bdev", 00:09:42.356 "thin_provision": false, 00:09:42.356 "num_allocated_clusters": 38, 00:09:42.356 "snapshot": false, 00:09:42.356 "clone": false, 00:09:42.356 "esnap_clone": false 00:09:42.356 } 00:09:42.356 } 00:09:42.356 } 00:09:42.356 ] 00:09:42.356 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:42.356 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:42.356 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:42.615 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:42.615 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:42.615 19:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:42.875 [2024-12-13 19:04:17.180456] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:42.875 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:43.135 request: 00:09:43.135 { 00:09:43.135 "uuid": "7bc5b410-c609-4026-b124-c02c9abe61ec", 00:09:43.135 "method": "bdev_lvol_get_lvstores", 00:09:43.135 "req_id": 1 00:09:43.135 } 00:09:43.135 Got JSON-RPC error response 00:09:43.135 response: 00:09:43.135 { 00:09:43.135 "code": -19, 00:09:43.135 "message": "No such device" 00:09:43.135 } 00:09:43.135 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:43.135 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.135 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.135 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.135 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.394 aio_bdev 00:09:43.394 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 746adbfe-f590-4d75-966d-f4e6a41ea07e 00:09:43.394 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=746adbfe-f590-4d75-966d-f4e6a41ea07e 00:09:43.394 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.394 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:43.394 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.394 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.394 19:04:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.654 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 746adbfe-f590-4d75-966d-f4e6a41ea07e -t 2000 00:09:43.654 [ 00:09:43.654 { 00:09:43.654 "name": "746adbfe-f590-4d75-966d-f4e6a41ea07e", 00:09:43.654 "aliases": [ 00:09:43.654 "lvs/lvol" 00:09:43.654 ], 00:09:43.654 "product_name": "Logical Volume", 00:09:43.654 "block_size": 4096, 00:09:43.654 "num_blocks": 38912, 00:09:43.654 "uuid": "746adbfe-f590-4d75-966d-f4e6a41ea07e", 00:09:43.654 "assigned_rate_limits": { 00:09:43.654 "rw_ios_per_sec": 0, 00:09:43.654 "rw_mbytes_per_sec": 0, 00:09:43.654 "r_mbytes_per_sec": 0, 00:09:43.654 "w_mbytes_per_sec": 0 00:09:43.654 }, 00:09:43.654 "claimed": false, 00:09:43.654 "zoned": false, 00:09:43.654 "supported_io_types": { 00:09:43.654 "read": true, 00:09:43.654 "write": true, 00:09:43.654 "unmap": true, 00:09:43.654 "flush": false, 00:09:43.654 "reset": true, 00:09:43.654 "nvme_admin": false, 00:09:43.654 "nvme_io": false, 00:09:43.654 "nvme_io_md": false, 00:09:43.654 "write_zeroes": true, 00:09:43.654 "zcopy": false, 00:09:43.654 "get_zone_info": false, 00:09:43.654 "zone_management": false, 00:09:43.654 "zone_append": false, 00:09:43.654 "compare": false, 00:09:43.654 "compare_and_write": false, 00:09:43.654 "abort": false, 00:09:43.654 "seek_hole": true, 00:09:43.654 "seek_data": true, 00:09:43.654 "copy": false, 00:09:43.654 "nvme_iov_md": false 00:09:43.654 }, 00:09:43.654 "driver_specific": { 00:09:43.654 "lvol": { 00:09:43.654 "lvol_store_uuid": "7bc5b410-c609-4026-b124-c02c9abe61ec", 00:09:43.654 "base_bdev": "aio_bdev", 00:09:43.654 "thin_provision": false, 00:09:43.654 "num_allocated_clusters": 38, 00:09:43.654 "snapshot": false, 00:09:43.654 "clone": false, 00:09:43.654 "esnap_clone": false 00:09:43.654 } 00:09:43.654 } 00:09:43.654 } 00:09:43.654 ] 00:09:43.654 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:43.654 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:43.654 19:04:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:43.914 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:43.914 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:43.914 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:44.173 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:44.173 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 746adbfe-f590-4d75-966d-f4e6a41ea07e 00:09:44.173 19:04:18 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7bc5b410-c609-4026-b124-c02c9abe61ec 00:09:44.433 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:44.692 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:44.692 00:09:44.692 real 0m16.900s 00:09:44.692 user 0m44.120s 00:09:44.692 sys 0m3.331s 00:09:44.692 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.692 19:04:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:44.692 ************************************ 00:09:44.692 END TEST lvs_grow_dirty 00:09:44.692 ************************************ 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:44.692 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:44.692 nvmf_trace.0 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:44.952 rmmod nvme_rdma 00:09:44.952 rmmod nvme_fabrics 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:44.952 
19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 167874 ']' 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 167874 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 167874 ']' 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 167874 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167874 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167874' 00:09:44.952 killing process with pid 167874 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 167874 00:09:44.952 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 167874 00:09:45.211 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.211 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:45.211 00:09:45.211 real 0m41.044s 00:09:45.211 user 1m5.383s 00:09:45.211 sys 0m10.721s 00:09:45.211 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.211 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.211 ************************************ 00:09:45.211 END TEST nvmf_lvs_grow 00:09:45.211 ************************************ 00:09:45.211 19:04:19 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:45.211 19:04:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.211 19:04:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.211 19:04:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.211 ************************************ 00:09:45.211 START TEST nvmf_bdev_io_wait 00:09:45.211 ************************************ 00:09:45.212 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:45.212 * Looking for test storage... 
00:09:45.212 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:45.212 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.212 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.212 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.472 --rc genhtml_branch_coverage=1 00:09:45.472 --rc genhtml_function_coverage=1 00:09:45.472 --rc genhtml_legend=1 00:09:45.472 --rc geninfo_all_blocks=1 00:09:45.472 --rc geninfo_unexecuted_blocks=1 00:09:45.472 00:09:45.472 ' 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.472 --rc genhtml_branch_coverage=1 00:09:45.472 --rc genhtml_function_coverage=1 00:09:45.472 --rc genhtml_legend=1 00:09:45.472 --rc geninfo_all_blocks=1 00:09:45.472 --rc geninfo_unexecuted_blocks=1 00:09:45.472 00:09:45.472 ' 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.472 --rc genhtml_branch_coverage=1 00:09:45.472 --rc genhtml_function_coverage=1 00:09:45.472 --rc genhtml_legend=1 00:09:45.472 --rc geninfo_all_blocks=1 00:09:45.472 --rc geninfo_unexecuted_blocks=1 00:09:45.472 00:09:45.472 ' 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.472 --rc genhtml_branch_coverage=1 00:09:45.472 --rc genhtml_function_coverage=1 00:09:45.472 --rc genhtml_legend=1 00:09:45.472 --rc geninfo_all_blocks=1 00:09:45.472 --rc geninfo_unexecuted_blocks=1 00:09:45.472 00:09:45.472 ' 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.472 19:04:19 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:45.472 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.473 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.473 19:04:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.603 19:04:26 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.603 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:53.604 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:53.604 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:53.604 19:04:26 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:53.604 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:53.604 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:53.604 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:53.604 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:53.604 altname enp217s0f0np0 00:09:53.604 altname ens818f0np0 00:09:53.604 inet 192.168.100.8/24 scope global mlx_0_0 00:09:53.604 valid_lft forever preferred_lft forever 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:53.604 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:53.604 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:53.604 altname enp217s0f1np1 00:09:53.604 altname ens818f1np1 00:09:53.604 inet 192.168.100.9/24 scope global mlx_0_1 00:09:53.604 valid_lft forever preferred_lft forever 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:53.604 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
-t rxe_net_devs 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:53.605 192.168.100.9' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:53.605 192.168.100.9' 00:09:53.605 
19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:53.605 192.168.100.9' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=172047 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 172047 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 172047 ']' 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.605 19:04:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 [2024-12-13 19:04:27.038449] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:53.605 [2024-12-13 19:04:27.038500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.605 [2024-12-13 19:04:27.127748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.605 [2024-12-13 19:04:27.150743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.605 [2024-12-13 19:04:27.150780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.605 [2024-12-13 19:04:27.150790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.605 [2024-12-13 19:04:27.150798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.605 [2024-12-13 19:04:27.150805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.605 [2024-12-13 19:04:27.152404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.605 [2024-12-13 19:04:27.152515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.605 [2024-12-13 19:04:27.152621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.605 [2024-12-13 19:04:27.152623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.605 19:04:27 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 [2024-12-13 19:04:27.337734] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18254b0/0x1829960) succeed. 00:09:53.605 [2024-12-13 19:04:27.347248] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1826af0/0x186b000) succeed. 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 Malloc0 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:53.605 [2024-12-13 19:04:27.534665] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.605 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=172137 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=172139 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
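The bring-up traced above follows the standard SPDK pattern: bdev_set_options has to run before framework_start_init (hence the --wait-for-rpc start), and the tiny pool it configures (-p 5 -c 1) is what lets this test exhaust the bdev_io pool and exercise the bdev_io_wait retry path. A minimal sketch of the same sequence as direct scripts/rpc.py calls, with every value copied from the trace (the harness's rpc_cmd wrapper is doing the equivalent; this is illustrative, not the harness source):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc bdev_set_options -p 5 -c 1              # bdev_io pool size 5, cache size 1: forces ENOMEM so I/O must queue and wait
$rpc framework_start_init                    # leave the --wait-for-rpc pause and finish subsystem init
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB ramdisk with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
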
00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.606 { 00:09:53.606 "params": { 00:09:53.606 "name": "Nvme$subsystem", 00:09:53.606 "trtype": "$TEST_TRANSPORT", 00:09:53.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.606 "adrfam": "ipv4", 00:09:53.606 "trsvcid": "$NVMF_PORT", 00:09:53.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.606 "hdgst": ${hdgst:-false}, 00:09:53.606 "ddgst": ${ddgst:-false} 00:09:53.606 }, 00:09:53.606 "method": "bdev_nvme_attach_controller" 00:09:53.606 } 00:09:53.606 EOF 00:09:53.606 )") 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=172141 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.606 { 00:09:53.606 "params": { 00:09:53.606 "name": "Nvme$subsystem", 00:09:53.606 "trtype": "$TEST_TRANSPORT", 00:09:53.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.606 "adrfam": "ipv4", 00:09:53.606 "trsvcid": "$NVMF_PORT", 00:09:53.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.606 "hdgst": ${hdgst:-false}, 00:09:53.606 "ddgst": ${ddgst:-false} 00:09:53.606 }, 00:09:53.606 "method": "bdev_nvme_attach_controller" 00:09:53.606 } 00:09:53.606 EOF 00:09:53.606 )") 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=172144 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.606 { 00:09:53.606 "params": { 00:09:53.606 "name": "Nvme$subsystem", 00:09:53.606 "trtype": "$TEST_TRANSPORT", 
00:09:53.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.606 "adrfam": "ipv4", 00:09:53.606 "trsvcid": "$NVMF_PORT", 00:09:53.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.606 "hdgst": ${hdgst:-false}, 00:09:53.606 "ddgst": ${ddgst:-false} 00:09:53.606 }, 00:09:53.606 "method": "bdev_nvme_attach_controller" 00:09:53.606 } 00:09:53.606 EOF 00:09:53.606 )") 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.606 { 00:09:53.606 "params": { 00:09:53.606 "name": "Nvme$subsystem", 00:09:53.606 "trtype": "$TEST_TRANSPORT", 00:09:53.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.606 "adrfam": "ipv4", 00:09:53.606 "trsvcid": "$NVMF_PORT", 00:09:53.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.606 "hdgst": ${hdgst:-false}, 00:09:53.606 "ddgst": ${ddgst:-false} 00:09:53.606 }, 00:09:53.606 "method": "bdev_nvme_attach_controller" 00:09:53.606 } 00:09:53.606 EOF 00:09:53.606 )") 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 172137 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.606 "params": { 00:09:53.606 "name": "Nvme1", 00:09:53.606 "trtype": "rdma", 00:09:53.606 "traddr": "192.168.100.8", 00:09:53.606 "adrfam": "ipv4", 00:09:53.606 "trsvcid": "4420", 00:09:53.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.606 "hdgst": false, 00:09:53.606 "ddgst": false 00:09:53.606 }, 00:09:53.606 "method": "bdev_nvme_attach_controller" 00:09:53.606 }' 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.606 "params": { 00:09:53.606 "name": "Nvme1", 00:09:53.606 "trtype": "rdma", 00:09:53.606 "traddr": "192.168.100.8", 00:09:53.606 "adrfam": "ipv4", 00:09:53.606 "trsvcid": "4420", 00:09:53.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.606 "hdgst": false, 00:09:53.606 "ddgst": false 00:09:53.606 }, 00:09:53.606 "method": "bdev_nvme_attach_controller" 00:09:53.606 }' 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.606 "params": { 00:09:53.606 "name": "Nvme1", 00:09:53.606 "trtype": "rdma", 00:09:53.606 "traddr": "192.168.100.8", 00:09:53.606 "adrfam": "ipv4", 00:09:53.606 "trsvcid": "4420", 00:09:53.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.606 "hdgst": false, 00:09:53.606 "ddgst": false 00:09:53.606 }, 00:09:53.606 "method": "bdev_nvme_attach_controller" 00:09:53.606 }' 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:53.606 19:04:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.606 "params": { 00:09:53.606 "name": "Nvme1", 00:09:53.606 "trtype": "rdma", 00:09:53.606 "traddr": "192.168.100.8", 00:09:53.606 "adrfam": "ipv4", 00:09:53.606 "trsvcid": "4420", 00:09:53.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.606 "hdgst": false, 00:09:53.606 "ddgst": false 00:09:53.606 }, 00:09:53.606 "method": "bdev_nvme_attach_controller" 00:09:53.606 }' 00:09:53.606 [2024-12-13 19:04:27.588418] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:53.606 [2024-12-13 19:04:27.588419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:53.606 [2024-12-13 19:04:27.588472] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:53.606 [2024-12-13 19:04:27.588472] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:53.606 [2024-12-13 19:04:27.588715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:53.606 [2024-12-13 19:04:27.588759] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:53.606 [2024-12-13 19:04:27.590477] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:09:53.606 [2024-12-13 19:04:27.590526] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:53.606 [2024-12-13 19:04:27.783474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.606 [2024-12-13 19:04:27.801510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:53.606 [2024-12-13 19:04:27.842268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.606 [2024-12-13 19:04:27.855961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:53.606 [2024-12-13 19:04:27.933747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.606 [2024-12-13 19:04:27.948904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:53.866 [2024-12-13 19:04:28.023005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.866 [2024-12-13 19:04:28.045573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:53.866 Running I/O for 1 seconds... 00:09:53.866 Running I/O for 1 seconds... 00:09:53.866 Running I/O for 1 seconds... 00:09:53.866 Running I/O for 1 seconds... 00:09:54.802 16974.00 IOPS, 66.30 MiB/s 00:09:54.802 Latency(us) 00:09:54.802 [2024-12-13T18:04:29.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.802 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:54.802 Nvme1n1 : 1.01 17012.06 66.45 0.00 0.00 7499.86 4325.38 12111.05 00:09:54.802 [2024-12-13T18:04:29.180Z] =================================================================================================================== 00:09:54.802 [2024-12-13T18:04:29.180Z] Total : 17012.06 66.45 0.00 0.00 7499.86 4325.38 12111.05 00:09:54.802 14376.00 IOPS, 56.16 MiB/s [2024-12-13T18:04:29.180Z] 251872.00 IOPS, 983.88 MiB/s 00:09:54.802 Latency(us) 00:09:54.802 [2024-12-13T18:04:29.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.802 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:54.802 Nvme1n1 : 1.00 251503.00 982.43 0.00 0.00 506.17 209.72 1795.69 00:09:54.802 [2024-12-13T18:04:29.180Z] =================================================================================================================== 00:09:54.802 [2024-12-13T18:04:29.180Z] Total : 251503.00 982.43 0.00 0.00 506.17 209.72 1795.69 00:09:54.802 00:09:54.802 Latency(us) 00:09:54.802 [2024-12-13T18:04:29.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.802 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:54.802 Nvme1n1 : 1.01 14435.95 56.39 0.00 0.00 8839.67 4535.09 16462.64 00:09:54.802 [2024-12-13T18:04:29.180Z] =================================================================================================================== 00:09:54.802 [2024-12-13T18:04:29.181Z] Total : 14435.95 56.39 0.00 0.00 8839.67 4535.09 16462.64 00:09:55.062 17824.00 IOPS, 69.62 MiB/s 00:09:55.062 Latency(us) 00:09:55.062 [2024-12-13T18:04:29.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.062 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:55.062 Nvme1n1 : 1.01 17908.25 69.95 0.00 0.00 7132.78 2713.19 16462.64 00:09:55.062 [2024-12-13T18:04:29.440Z] 
=================================================================================================================== 00:09:55.062 [2024-12-13T18:04:29.440Z] Total : 17908.25 69.95 0.00 0.00 7132.78 2713.19 16462.64 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 172139 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 172141 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 172144 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:55.062 rmmod nvme_rdma 00:09:55.062 rmmod nvme_fabrics 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 172047 ']' 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 172047 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 172047 ']' 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 172047 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.062 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172047 00:09:55.321 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.321 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:09:55.321 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172047' 00:09:55.321 killing process with pid 172047 00:09:55.321 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 172047 00:09:55.321 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 172047 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:55.581 00:09:55.581 real 0m10.275s 00:09:55.581 user 0m17.347s 00:09:55.581 sys 0m6.947s 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.581 ************************************ 00:09:55.581 END TEST nvmf_bdev_io_wait 00:09:55.581 ************************************ 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.581 ************************************ 00:09:55.581 START TEST nvmf_queue_depth 00:09:55.581 ************************************ 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:55.581 * Looking for test storage... 
00:09:55.581 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.581 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.841 --rc genhtml_branch_coverage=1 00:09:55.841 --rc genhtml_function_coverage=1 00:09:55.841 --rc genhtml_legend=1 00:09:55.841 --rc geninfo_all_blocks=1 00:09:55.841 --rc geninfo_unexecuted_blocks=1 00:09:55.841 00:09:55.841 ' 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.841 --rc genhtml_branch_coverage=1 00:09:55.841 --rc genhtml_function_coverage=1 00:09:55.841 --rc genhtml_legend=1 00:09:55.841 --rc geninfo_all_blocks=1 00:09:55.841 --rc geninfo_unexecuted_blocks=1 00:09:55.841 00:09:55.841 ' 00:09:55.841 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.841 --rc genhtml_branch_coverage=1 00:09:55.841 --rc genhtml_function_coverage=1 00:09:55.842 --rc genhtml_legend=1 00:09:55.842 --rc geninfo_all_blocks=1 00:09:55.842 --rc geninfo_unexecuted_blocks=1 00:09:55.842 00:09:55.842 ' 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.842 --rc genhtml_branch_coverage=1 00:09:55.842 --rc genhtml_function_coverage=1 00:09:55.842 --rc genhtml_legend=1 00:09:55.842 --rc geninfo_all_blocks=1 00:09:55.842 --rc geninfo_unexecuted_blocks=1 00:09:55.842 00:09:55.842 ' 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.842 19:04:29 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.842 19:04:29 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.842 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.842 19:04:30 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:03.973 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:03.973 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.973 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:03.974 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:03.974 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:03.974 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:03.974 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:03.974 altname enp217s0f0np0 00:10:03.974 altname ens818f0np0 00:10:03.974 inet 192.168.100.8/24 scope global mlx_0_0 00:10:03.974 valid_lft forever preferred_lft forever 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:03.974 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:03.974 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:03.974 altname enp217s0f1np1 00:10:03.974 altname ens818f1np1 00:10:03.974 inet 192.168.100.9/24 scope global mlx_0_1 00:10:03.974 valid_lft forever preferred_lft forever 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:03.974 19:04:37 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:03.974 192.168.100.9' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:03.974 192.168.100.9' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:03.974 192.168.100.9' 00:10:03.974 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=175873 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 175873 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 175873 ']' 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 [2024-12-13 19:04:37.390766] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:03.975 [2024-12-13 19:04:37.390824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.975 [2024-12-13 19:04:37.483113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.975 [2024-12-13 19:04:37.503951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.975 [2024-12-13 19:04:37.503986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.975 [2024-12-13 19:04:37.503995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.975 [2024-12-13 19:04:37.504003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.975 [2024-12-13 19:04:37.504009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.975 [2024-12-13 19:04:37.504612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 [2024-12-13 19:04:37.676776] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cb9680/0x1cbdb30) succeed. 00:10:03.975 [2024-12-13 19:04:37.685679] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cbaae0/0x1cff1d0) succeed. 
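For orientation, the target-side bring-up traced above reduces to a short command sequence. The sketch below is reconstructed from the xtrace (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock; reading -u as the in-capsule data size is an assumption from the option's usual meaning):

  # Start the NVMe-oF target pinned to core 1 (mask 0x2), all tracepoint groups enabled (-e 0xFFFF).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # Once it listens on /var/tmp/spdk.sock, create the RDMA transport:
  # 1024 shared buffers, -u 8192 (assumed: 8 KiB in-capsule data size).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192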
00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 Malloc0 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 [2024-12-13 19:04:37.783593] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=176011 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 176011 /var/tmp/bdevperf.sock 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 176011 ']' 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:03.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.975 19:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 [2024-12-13 19:04:37.834601] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:03.975 [2024-12-13 19:04:37.834650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176011 ] 00:10:03.975 [2024-12-13 19:04:37.926985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.975 [2024-12-13 19:04:37.949322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.975 19:04:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.975 19:04:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:03.975 19:04:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:03.975 19:04:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.975 19:04:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.975 NVMe0n1 00:10:03.975 19:04:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.975 19:04:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:03.975 Running I/O for 10 seconds... 
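The initiator side mirrors the trace: bdevperf starts in wait mode (-z) on its own RPC socket, the remote namespace is attached over RDMA, and the registered run is kicked off via bdevperf.py. All arguments below are taken verbatim from the trace; only the backgrounding is a sketch:

  # bdevperf: queue depth 1024, 4 KiB I/Os, verify workload, 10 s run, private RPC socket.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # Attach the namespace exported by nqn.2016-06.io.spdk:cnode1 as bdev NVMe0.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Trigger the configured workload.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests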
00:10:05.851 17091.00 IOPS, 66.76 MiB/s
[2024-12-13T18:04:41.609Z] 17408.00 IOPS, 68.00 MiB/s
[2024-12-13T18:04:42.547Z] 17457.33 IOPS, 68.19 MiB/s
[2024-12-13T18:04:43.484Z] 17620.25 IOPS, 68.83 MiB/s
[2024-12-13T18:04:44.423Z] 17612.80 IOPS, 68.80 MiB/s
[2024-12-13T18:04:45.360Z] 17672.83 IOPS, 69.03 MiB/s
[2024-12-13T18:04:46.295Z] 17700.57 IOPS, 69.14 MiB/s
[2024-12-13T18:04:47.676Z] 17684.00 IOPS, 69.08 MiB/s
[2024-12-13T18:04:48.244Z] 17738.44 IOPS, 69.29 MiB/s
[2024-12-13T18:04:48.504Z] 17715.20 IOPS, 69.20 MiB/s
00:10:14.126 Latency(us)
00:10:14.126 [2024-12-13T18:04:48.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:14.126 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:14.126 Verification LBA range: start 0x0 length 0x4000
00:10:14.126 NVMe0n1 : 10.04 17742.56 69.31 0.00 0.00 57573.31 22439.53 36490.44
00:10:14.126 [2024-12-13T18:04:48.504Z] ===================================================================================================================
00:10:14.126 [2024-12-13T18:04:48.504Z] Total : 17742.56 69.31 0.00 0.00 57573.31 22439.53 36490.44
00:10:14.126 {
00:10:14.126 "results": [
00:10:14.126 {
00:10:14.126 "job": "NVMe0n1",
00:10:14.126 "core_mask": "0x1",
00:10:14.126 "workload": "verify",
00:10:14.126 "status": "finished",
00:10:14.126 "verify_range": {
00:10:14.126 "start": 0,
00:10:14.126 "length": 16384
00:10:14.126 },
00:10:14.126 "queue_depth": 1024,
00:10:14.126 "io_size": 4096,
00:10:14.126 "runtime": 10.042296,
00:10:14.126 "iops": 17742.5560847838,
00:10:14.126 "mibps": 69.30685970618671,
00:10:14.126 "io_failed": 0,
00:10:14.126 "io_timeout": 0,
00:10:14.126 "avg_latency_us": 57573.311558620684,
00:10:14.126 "min_latency_us": 22439.5264,
00:10:14.126 "max_latency_us": 36490.4448
00:10:14.126 }
00:10:14.126 ],
00:10:14.126 "core_count": 1
00:10:14.127 }
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 176011
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 176011 ']'
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 176011
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176011
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176011'
killing process with pid 176011
00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 176011
Received shutdown signal, test time was about 10.000000 seconds
00:10:14.127
00:10:14.127 Latency(us)
00:10:14.127 [2024-12-13T18:04:48.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:14.127 [2024-12-13T18:04:48.505Z]
=================================================================================================================== 00:10:14.127 [2024-12-13T18:04:48.505Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:14.127 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 176011 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:14.386 rmmod nvme_rdma 00:10:14.386 rmmod nvme_fabrics 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 175873 ']' 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 175873 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 175873 ']' 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 175873 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 175873 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 175873' 00:10:14.386 killing process with pid 175873 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 175873 00:10:14.386 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 175873 00:10:14.647 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.647 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:14.647 00:10:14.647 real 0m19.073s 00:10:14.647 user 0m24.255s 00:10:14.647 sys 0m6.371s 00:10:14.647 19:04:48 
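As a quick sanity check on the summary table above, the reported figures are self-consistent: 17742.56 IOPS at 4096 B per I/O is 17742.56 x 4096 = 72,673,526 B/s, or about 69.31 MiB/s, matching the MiB/s column; and 17742.56 IOPS over the 10.042296 s runtime works out to roughly 178,000 I/Os completed at a sustained queue depth of 1024.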
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.647 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.647 ************************************ 00:10:14.647 END TEST nvmf_queue_depth 00:10:14.647 ************************************ 00:10:14.647 19:04:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:14.647 19:04:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.647 19:04:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.647 19:04:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.647 ************************************ 00:10:14.647 START TEST nvmf_target_multipath 00:10:14.647 ************************************ 00:10:14.647 19:04:48 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:14.907 * Looking for test storage... 00:10:14.907 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.907 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:14.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.908 --rc genhtml_branch_coverage=1 00:10:14.908 --rc genhtml_function_coverage=1 00:10:14.908 --rc genhtml_legend=1 00:10:14.908 --rc geninfo_all_blocks=1 00:10:14.908 --rc geninfo_unexecuted_blocks=1 00:10:14.908 00:10:14.908 ' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:14.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.908 --rc genhtml_branch_coverage=1 00:10:14.908 --rc genhtml_function_coverage=1 00:10:14.908 --rc genhtml_legend=1 00:10:14.908 --rc geninfo_all_blocks=1 00:10:14.908 --rc geninfo_unexecuted_blocks=1 00:10:14.908 00:10:14.908 ' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:14.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.908 --rc genhtml_branch_coverage=1 00:10:14.908 --rc genhtml_function_coverage=1 00:10:14.908 --rc genhtml_legend=1 00:10:14.908 --rc geninfo_all_blocks=1 00:10:14.908 --rc geninfo_unexecuted_blocks=1 00:10:14.908 00:10:14.908 ' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:14.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.908 --rc genhtml_branch_coverage=1 00:10:14.908 --rc genhtml_function_coverage=1 00:10:14.908 --rc genhtml_legend=1 00:10:14.908 --rc geninfo_all_blocks=1 00:10:14.908 --rc geninfo_unexecuted_blocks=1 00:10:14.908 00:10:14.908 ' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.908 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.908 19:04:49 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:23.041 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:23.041 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:23.041 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:23.041 
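The 'Found net devices under ...' lines above come from plain sysfs walking; stripped of the helper functions, the lookup is roughly this sketch (PCI address taken from this run):

  pci=0000:d9:00.0
  # Each mlx5 PCI function registers its netdevs under sysfs; here this yields mlx_0_0.
  ls /sys/bus/pci/devices/$pci/net/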
19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:23.041 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
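The repeated address lookups in this section all reduce to one pipeline, the get_ip_address helper visible in the trace:

  # First IPv4 address of an interface, with the /24 prefix length stripped; prints 192.168.100.8 here.
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1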
00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:23.041 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:23.042 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:23.042 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:23.042 altname enp217s0f0np0 00:10:23.042 altname ens818f0np0 00:10:23.042 inet 192.168.100.8/24 scope global mlx_0_0 00:10:23.042 valid_lft forever preferred_lft forever 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:23.042 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:23.042 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:23.042 altname enp217s0f1np1 00:10:23.042 altname ens818f1np1 00:10:23.042 inet 192.168.100.9/24 scope global mlx_0_1 00:10:23.042 valid_lft forever preferred_lft forever 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:23.042 192.168.100.9' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:23.042 192.168.100.9' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:23.042 192.168.100.9' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:10:23.042 run this test only with TCP transport for now 00:10:23.042 19:04:56 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:23.042 rmmod nvme_rdma 00:10:23.042 rmmod nvme_fabrics 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.042 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:23.043 00:10:23.043 real 0m7.619s 00:10:23.043 user 0m2.242s 00:10:23.043 sys 0m5.586s 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
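nvmftestfini, traced twice above (once from the test's early exit and once from its EXIT trap), unloads the initiator-side kernel modules after a sync; its core is simply the following (a sketch; the for-loop in nvmf/common.sh retries this up to 20 times):

  sync
  # The bare 'rmmod nvme_rdma' / 'rmmod nvme_fabrics' lines in the log are modprobe's verbose output.
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics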
00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:23.043 ************************************ 00:10:23.043 END TEST nvmf_target_multipath 00:10:23.043 ************************************ 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.043 ************************************ 00:10:23.043 START TEST nvmf_zcopy 00:10:23.043 ************************************ 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:23.043 * Looking for test storage... 00:10:23.043 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:23.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.043 --rc genhtml_branch_coverage=1 00:10:23.043 --rc genhtml_function_coverage=1 00:10:23.043 --rc genhtml_legend=1 00:10:23.043 --rc geninfo_all_blocks=1 00:10:23.043 --rc geninfo_unexecuted_blocks=1 00:10:23.043 00:10:23.043 ' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:23.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.043 --rc genhtml_branch_coverage=1 00:10:23.043 --rc genhtml_function_coverage=1 00:10:23.043 --rc genhtml_legend=1 00:10:23.043 --rc geninfo_all_blocks=1 00:10:23.043 --rc geninfo_unexecuted_blocks=1 00:10:23.043 00:10:23.043 ' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:23.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.043 --rc genhtml_branch_coverage=1 00:10:23.043 --rc genhtml_function_coverage=1 00:10:23.043 --rc genhtml_legend=1 00:10:23.043 --rc geninfo_all_blocks=1 00:10:23.043 --rc geninfo_unexecuted_blocks=1 00:10:23.043 00:10:23.043 ' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:23.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.043 --rc genhtml_branch_coverage=1 00:10:23.043 --rc genhtml_function_coverage=1 00:10:23.043 --rc genhtml_legend=1 00:10:23.043 --rc geninfo_all_blocks=1 00:10:23.043 --rc geninfo_unexecuted_blocks=1 00:10:23.043 00:10:23.043 ' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.043 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.044 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.044 19:04:56 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:29.620 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:29.620 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
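The device scan above walks the harness's cached PCI list and matches Mellanox parts by vendor/device ID (0x15b3 / 0x1015 here) before reporting 'Found 0000:d9:00.x'. The `pci_bus_cache` arrays are harness-internal and not reproduced in this excerpt; a rough stand-alone equivalent using only the standard sysfs layout:

    #!/usr/bin/env bash
    # Enumerate PCI functions and report 0x15b3:0x1015 (mlx5) devices
    # together with the net interfaces the kernel created for them.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        [[ $vendor == 0x15b3 && $device == 0x1015 ]] || continue
        echo "Found ${dev##*/} ($vendor - $device)"
        for net in "$dev"/net/*; do                 # netdevs live under net/
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done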
00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:29.620 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:29.620 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:29.620 19:05:03 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:29.880 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.880 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:29.880 altname enp217s0f0np0 00:10:29.880 altname ens818f0np0 00:10:29.880 inet 192.168.100.8/24 scope global mlx_0_0 
00:10:29.880 valid_lft forever preferred_lft forever 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:29.880 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.880 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:29.880 altname enp217s0f1np1 00:10:29.880 altname ens818f1np1 00:10:29.880 inet 192.168.100.9/24 scope global mlx_0_1 00:10:29.880 valid_lft forever preferred_lft forever 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.880 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.881 19:05:04 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:29.881 192.168.100.9' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:29.881 192.168.100.9' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:29.881 192.168.100.9' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=184695 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 184695 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 184695 ']' 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.881 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.139 [2024-12-13 19:05:04.291628] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:30.140 [2024-12-13 19:05:04.291679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.140 [2024-12-13 19:05:04.381998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.140 [2024-12-13 19:05:04.403943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.140 [2024-12-13 19:05:04.403974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.140 [2024-12-13 19:05:04.403983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.140 [2024-12-13 19:05:04.403992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.140 [2024-12-13 19:05:04.403999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
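`waitforlisten 184695` above blocks until the freshly launched nvmf_tgt is ready to accept RPCs on /var/tmp/spdk.sock (rpc_addr, max_retries=100 in the trace). The harness's own implementation isn't shown in this excerpt; a simplified stand-in that polls for the socket with the same defaults could look like:

    #!/usr/bin/env bash
    # Wait until a PID is alive and its RPC UNIX socket appears, or give up.
    # Note: the real helper likely also issues an RPC to confirm readiness;
    # checking for the socket file is a simplification.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
            [[ -S $rpc_addr ]] && return 0          # socket file exists
            sleep 0.1
        done
        return 1                                    # timed out
    }

    # Usage: nvmf_tgt -i 0 -e 0xFFFF -m 0x2 & waitforlisten_sketch $!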
00:10:30.140 [2024-12-13 19:05:04.404491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.140 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.140 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:30.140 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.140 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.140 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:10:30.399 Unsupported transport: rdma 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:30.399 nvmf_trace.0 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:30.399 rmmod nvme_rdma 00:10:30.399 rmmod nvme_fabrics 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
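Before teardown, the exit trap archives the shared-memory trace: `find /dev/shm -name '*.0'` locates nvmf_trace.0 and `tar -C /dev/shm/ -cvzf ...` copies it into the build's output directory so it survives the workspace wipe. The same steps condensed (the destination path is illustrative):

    #!/usr/bin/env bash
    # Archive SPDK shm trace files (e.g. nvmf_trace.0) for offline analysis.
    output_dir=./output                         # illustrative destination
    mkdir -p "$output_dir"
    while IFS= read -r shm_file; do
        tar -C /dev/shm -czvf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"
    done < <(find /dev/shm -maxdepth 1 -name '*.0' -printf '%f\n')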
00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 184695 ']' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 184695 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 184695 ']' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 184695 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 184695 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 184695' 00:10:30.399 killing process with pid 184695 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 184695 00:10:30.399 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 184695 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:30.659 00:10:30.659 real 0m8.201s 00:10:30.659 user 0m2.872s 00:10:30.659 sys 0m5.971s 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.659 ************************************ 00:10:30.659 END TEST nvmf_zcopy 00:10:30.659 ************************************ 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.659 ************************************ 00:10:30.659 START TEST nvmf_nmic 00:10:30.659 ************************************ 00:10:30.659 19:05:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:30.921 * Looking for test storage... 
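`killprocess 184695` above does more than a bare `kill`: it checks the PID is still alive with `kill -0`, reads the command name via `ps --no-headers -o comm=` so it never signals a `sudo` wrapper by mistake, then kills and `wait`s so the exit status is reaped before the next test starts. Condensed:

    #!/usr/bin/env bash
    # Kill a test process by PID, guarding against stale or wrong PIDs.
    killprocess_sketch() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1     # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap; tolerate nonzero exit
    }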
00:10:30.921 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.921 --rc genhtml_branch_coverage=1 00:10:30.921 --rc genhtml_function_coverage=1 00:10:30.921 --rc genhtml_legend=1 00:10:30.921 --rc geninfo_all_blocks=1 00:10:30.921 --rc geninfo_unexecuted_blocks=1 00:10:30.921 00:10:30.921 ' 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.921 --rc genhtml_branch_coverage=1 00:10:30.921 --rc genhtml_function_coverage=1 00:10:30.921 --rc genhtml_legend=1 00:10:30.921 --rc geninfo_all_blocks=1 00:10:30.921 --rc geninfo_unexecuted_blocks=1 00:10:30.921 00:10:30.921 ' 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.921 --rc genhtml_branch_coverage=1 00:10:30.921 --rc genhtml_function_coverage=1 00:10:30.921 --rc genhtml_legend=1 00:10:30.921 --rc geninfo_all_blocks=1 00:10:30.921 --rc geninfo_unexecuted_blocks=1 00:10:30.921 00:10:30.921 ' 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.921 --rc genhtml_branch_coverage=1 00:10:30.921 --rc genhtml_function_coverage=1 00:10:30.921 --rc genhtml_legend=1 00:10:30.921 --rc geninfo_all_blocks=1 00:10:30.921 --rc geninfo_unexecuted_blocks=1 00:10:30.921 00:10:30.921 ' 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.921 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.922 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
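The `[: : integer expression expected` complaint above (the zcopy run hit the same line when it sourced this file) comes from `'[' '' -eq 1 ']'`: the variable behind it is empty, and `[` refuses to compare a non-number rather than evaluating false. Guarding the expansion silences the noise; two common idioms:

    #!/usr/bin/env bash
    # '[ "$x" -eq 1 ]' errors when x is empty; give it a numeric default:
    x=""
    if [ "${x:-0}" -eq 1 ]; then    # empty x is treated as 0, no error printed
        echo "x is 1"
    fi

    # ...or require the variable to be non-empty before comparing:
    if [[ -n $x && $x -eq 1 ]]; then
        echo "x is 1"
    fi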
00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.922 19:05:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.053 19:05:12 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:39.053 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:39.053 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:39.054 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:39.054 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:39.054 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
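[editor's note] The "Found net devices under ..." lines above come from a sysfs walk; reduced to its essentials it looks like the following (a simplified rendition of the common.sh loop, with this run's two ConnectX-4 ports hard-coded):

  for pci in 0000:d9:00.0 0000:d9:00.1; do
      # Every netdev backed by this PCI function shows up under its sysfs node
      for net in "/sys/bus/pci/devices/$pci/net/"*; do
          echo "Found net devices under $pci: ${net##*/}"
      done
  done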
00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:39.054 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:39.054 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:39.054 altname enp217s0f0np0 00:10:39.054 altname 
ens818f0np0 00:10:39.054 inet 192.168.100.8/24 scope global mlx_0_0 00:10:39.054 valid_lft forever preferred_lft forever 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:39.054 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:39.054 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:39.054 altname enp217s0f1np1 00:10:39.054 altname ens818f1np1 00:10:39.054 inet 192.168.100.9/24 scope global mlx_0_1 00:10:39.054 valid_lft forever preferred_lft forever 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:39.054 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:39.055 192.168.100.9' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:39.055 192.168.100.9' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:39.055 192.168.100.9' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=188329 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 188329 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 188329 ']' 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 [2024-12-13 19:05:12.596319] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:39.055 [2024-12-13 19:05:12.596377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.055 [2024-12-13 19:05:12.686161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.055 [2024-12-13 19:05:12.710026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.055 [2024-12-13 19:05:12.710067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.055 [2024-12-13 19:05:12.710076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.055 [2024-12-13 19:05:12.710084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.055 [2024-12-13 19:05:12.710091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
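[editor's note] Stripped of the helper indirection, nvmfappstart boils down to launching the target and polling its RPC socket; a sketch with this run's flags (the polling loop is illustrative, waitforlisten's real implementation differs):

  # Launch the NVMe-oF target on cores 0-3 (-m 0xF) with all tracepoint groups enabled
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Block until the app answers on the default RPC socket (/var/tmp/spdk.sock)
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done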
00:10:39.055 [2024-12-13 19:05:12.714078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.055 [2024-12-13 19:05:12.714194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.055 [2024-12-13 19:05:12.714107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.055 [2024-12-13 19:05:12.714195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 [2024-12-13 19:05:12.888855] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19ad540/0x19b19f0) succeed. 00:10:39.055 [2024-12-13 19:05:12.898123] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19aeb80/0x19f3090) succeed. 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 Malloc0 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:39.055 19:05:13 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 [2024-12-13 19:05:13.075894] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:39.055 test case1: single bdev can't be used in multiple subsystems 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 [2024-12-13 19:05:13.103716] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:39.055 [2024-12-13 19:05:13.103737] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:39.055 [2024-12-13 19:05:13.103747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.055 request: 00:10:39.055 { 00:10:39.055 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:39.055 "namespace": { 00:10:39.055 "bdev_name": "Malloc0", 00:10:39.055 "no_auto_visible": false, 00:10:39.055 "hide_metadata": false 00:10:39.055 }, 00:10:39.055 "method": "nvmf_subsystem_add_ns", 00:10:39.055 "req_id": 1 00:10:39.055 } 00:10:39.055 Got JSON-RPC error response 00:10:39.055 response: 00:10:39.055 { 00:10:39.055 "code": -32602, 00:10:39.055 "message": "Invalid parameters" 00:10:39.055 } 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:10:39.055 Adding namespace failed - expected result. 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:39.055 test case2: host connect to nvmf target in multiple paths 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.055 [2024-12-13 19:05:13.119791] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 19:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:39.993 19:05:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:40.934 19:05:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.934 19:05:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:40.934 19:05:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.934 19:05:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:40.934 19:05:15 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:42.839 19:05:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:42.839 19:05:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:42.839 19:05:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.839 19:05:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:42.839 19:05:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.839 19:05:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:42.839 19:05:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:42.839 [global] 00:10:42.839 thread=1 00:10:42.839 invalidate=1 00:10:42.839 rw=write 00:10:42.839 time_based=1 00:10:42.839 runtime=1 00:10:42.839 ioengine=libaio 00:10:42.839 direct=1 00:10:42.839 bs=4096 00:10:42.839 iodepth=1 00:10:42.839 norandommap=0 00:10:42.839 numjobs=1 00:10:42.839 00:10:42.839 verify_dump=1 00:10:42.839 verify_backlog=512 00:10:42.839 verify_state_save=0 00:10:42.839 do_verify=1 00:10:42.839 verify=crc32c-intel 00:10:42.839 [job0] 00:10:42.839 filename=/dev/nvme0n1 00:10:42.839 Could not set queue depth 
(nvme0n1) 00:10:43.406 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.406 fio-3.35 00:10:43.406 Starting 1 thread 00:10:44.786 00:10:44.786 job0: (groupid=0, jobs=1): err= 0: pid=189310: Fri Dec 13 19:05:18 2024 00:10:44.786 read: IOPS=6815, BW=26.6MiB/s (27.9MB/s)(26.6MiB/1001msec) 00:10:44.786 slat (nsec): min=8104, max=38722, avg=8711.95, stdev=932.47 00:10:44.786 clat (nsec): min=39772, max=83760, avg=60022.85, stdev=3536.78 00:10:44.786 lat (usec): min=59, max=122, avg=68.73, stdev= 3.66 00:10:44.786 clat percentiles (nsec): 00:10:44.786 | 1.00th=[52992], 5.00th=[55040], 10.00th=[55552], 20.00th=[57088], 00:10:44.786 | 30.00th=[58112], 40.00th=[58624], 50.00th=[59648], 60.00th=[60672], 00:10:44.786 | 70.00th=[61696], 80.00th=[62720], 90.00th=[64768], 95.00th=[66048], 00:10:44.786 | 99.00th=[70144], 99.50th=[71168], 99.90th=[76288], 99.95th=[78336], 00:10:44.786 | 99.99th=[83456] 00:10:44.786 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:10:44.786 slat (nsec): min=8508, max=49042, avg=11341.35, stdev=1114.96 00:10:44.786 clat (usec): min=42, max=199, avg=57.64, stdev= 4.18 00:10:44.786 lat (usec): min=60, max=211, avg=68.99, stdev= 4.35 00:10:44.786 clat percentiles (usec): 00:10:44.786 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:10:44.786 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:10:44.786 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 63], 95.00th=[ 64], 00:10:44.786 | 99.00th=[ 68], 99.50th=[ 70], 99.90th=[ 77], 99.95th=[ 99], 00:10:44.786 | 99.99th=[ 200] 00:10:44.786 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:10:44.786 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:10:44.786 lat (usec) : 50=0.16%, 100=99.82%, 250=0.02% 00:10:44.786 cpu : usr=11.40%, sys=17.90%, ctx=13990, majf=0, minf=1 00:10:44.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.786 issued rwts: total=6822,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.786 00:10:44.786 Run status group 0 (all jobs): 00:10:44.786 READ: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=26.6MiB (27.9MB), run=1001-1001msec 00:10:44.786 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:10:44.786 00:10:44.786 Disk stats (read/write): 00:10:44.786 nvme0n1: ios=6193/6419, merge=0/0, ticks=325/302, in_queue=627, util=90.58% 00:10:44.786 19:05:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:46.692 rmmod nvme_rdma 00:10:46.692 rmmod nvme_fabrics 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 188329 ']' 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 188329 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 188329 ']' 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 188329 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 188329 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 188329' 00:10:46.692 killing process with pid 188329 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 188329 00:10:46.692 19:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 188329 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:46.951 00:10:46.951 real 0m16.284s 00:10:46.951 user 0m44.723s 00:10:46.951 sys 0m6.793s 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.951 
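[editor's note] For reference, the host-side sequence the trace above walked through (connect on both listeners, wait for the namespace, run the fio wrapper, then tear down) reduces to a handful of commands. A sketch using this run's values, not the verbatim helpers:

  # Connect to the subsystem over both listeners: two paths to the same namespace
  host_opts="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e"
  for port in 4420 4421; do
      nvme connect -i 15 $host_opts -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s "$port"
  done
  # Wait until lsblk shows the namespace by serial, then exercise it
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
  # Teardown: drop both paths, unload the host transport modules, stop the target
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-rdma nvme-fabrics
  kill 188329 && wait 188329   # nvmf_tgt pid from this run; wait assumes it is our child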
************************************ 00:10:46.951 END TEST nvmf_nmic 00:10:46.951 ************************************ 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.951 ************************************ 00:10:46.951 START TEST nvmf_fio_target 00:10:46.951 ************************************ 00:10:46.951 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:47.211 * Looking for test storage... 00:10:47.211 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:47.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.211 --rc genhtml_branch_coverage=1 00:10:47.211 --rc genhtml_function_coverage=1 00:10:47.211 --rc genhtml_legend=1 00:10:47.211 --rc geninfo_all_blocks=1 00:10:47.211 --rc geninfo_unexecuted_blocks=1 00:10:47.211 00:10:47.211 ' 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:47.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.211 --rc genhtml_branch_coverage=1 00:10:47.211 --rc genhtml_function_coverage=1 00:10:47.211 --rc genhtml_legend=1 00:10:47.211 --rc geninfo_all_blocks=1 00:10:47.211 --rc geninfo_unexecuted_blocks=1 00:10:47.211 00:10:47.211 ' 00:10:47.211 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:47.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.211 --rc genhtml_branch_coverage=1 00:10:47.211 --rc genhtml_function_coverage=1 00:10:47.211 --rc genhtml_legend=1 00:10:47.212 --rc geninfo_all_blocks=1 00:10:47.212 --rc geninfo_unexecuted_blocks=1 00:10:47.212 00:10:47.212 ' 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:47.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.212 --rc genhtml_branch_coverage=1 00:10:47.212 --rc genhtml_function_coverage=1 00:10:47.212 --rc genhtml_legend=1 00:10:47.212 --rc geninfo_all_blocks=1 00:10:47.212 --rc geninfo_unexecuted_blocks=1 00:10:47.212 00:10:47.212 ' 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.212 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.212 
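[editor's note] fio.sh drives the target through rpc.py like nmic before it; the two constants just defined feed the malloc-bdev RPC, roughly as below (bdev name illustrative; fio.sh's full setup layers further bdevs on top of this first step):

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # A 64 MiB malloc bdev with 512-byte blocks, per the constants above
  $rpc_py bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b Malloc0  # name illustrative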
19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.212 19:05:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:55.343 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:55.343 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:55.343 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:55.343 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:55.343 19:05:28 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:55.343 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:55.344 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:55.344 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:55.344 altname enp217s0f0np0 00:10:55.344 altname ens818f0np0 00:10:55.344 inet 192.168.100.8/24 scope global mlx_0_0 00:10:55.344 valid_lft forever preferred_lft forever 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:55.344 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:55.344 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:55.344 altname enp217s0f1np1 00:10:55.344 altname ens818f1np1 00:10:55.344 inet 192.168.100.9/24 scope global mlx_0_1 00:10:55.344 valid_lft forever preferred_lft forever 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:55.344 19:05:28 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:55.344 192.168.100.9' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:55.344 192.168.100.9' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:55.344 192.168.100.9' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=193292 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 193292 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 193292 ']' 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.344 19:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.344 [2024-12-13 19:05:28.916031] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:55.344 [2024-12-13 19:05:28.916083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.344 [2024-12-13 19:05:29.004732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.344 [2024-12-13 19:05:29.027268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
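A note on the launch just traced: nvmfappstart backgrounds the target binary (pid 193292 here) and blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock accepts connections. The flags visible in the trace map onto SPDK's common application options; the glosses below are my reading of them and worth checking against the binary's --help rather than taking as authoritative:

    # nvmf_tgt invocation copied from the trace:
    #   -i 0       shared-memory instance id (hence --file-prefix=spdk0 in the EAL arguments)
    #   -e 0xFFFF  tracepoint group mask (see the 'Tracepoint Group Mask 0xFFFF' notice above)
    #   -m 0xF     reactor core mask, cores 0-3 (matching the four 'Reactor started' lines below)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF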
00:10:55.344 [2024-12-13 19:05:29.027308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.344 [2024-12-13 19:05:29.027317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.344 [2024-12-13 19:05:29.027326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.344 [2024-12-13 19:05:29.027332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.344 [2024-12-13 19:05:29.029064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.344 [2024-12-13 19:05:29.029135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.344 [2024-12-13 19:05:29.029246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.344 [2024-12-13 19:05:29.029247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.344 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.344 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:55.344 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.344 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.344 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.344 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.345 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:55.345 [2024-12-13 19:05:29.373983] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1144540/0x11489f0) succeed. 00:10:55.345 [2024-12-13 19:05:29.383131] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1145b80/0x118a090) succeed. 
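The pair of create_ib_device notices above confirms that the first RPC of the test, creating the RDMA transport, bound both mlx5 ports. Reproducing that step by hand against an already-running nvmf_tgt is a one-liner; the command and values below are copied from the trace, while the flag glosses are my reading of rpc.py's options and should be verified with rpc.py nvmf_create_transport -h:

    # Create the RDMA transport exactly as fio.sh@19 does:
    #   -t rdma                    transport type
    #   --num-shared-buffers 1024  shared receive-buffer pool size
    #   -u 8192                    believed to be --io-unit-size, in bytes
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192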
00:10:55.345 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.604 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:55.604 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.604 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:55.604 19:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.863 19:05:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:55.863 19:05:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.122 19:05:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:56.122 19:05:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:56.381 19:05:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.641 19:05:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:56.641 19:05:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.900 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:56.900 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.900 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:56.900 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:57.159 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.418 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:57.418 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.678 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:57.678 19:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.678 19:05:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:57.937 [2024-12-13 19:05:32.196732] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:57.937 19:05:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:58.196 19:05:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:58.455 19:05:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:59.391 19:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:59.391 19:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:59.391 19:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.391 19:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:59.391 19:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:59.391 19:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:01.296 19:05:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:01.296 19:05:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:01.296 19:05:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.296 19:05:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:01.296 19:05:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.296 19:05:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:01.296 19:05:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:01.296 [global] 00:11:01.296 thread=1 00:11:01.296 invalidate=1 00:11:01.296 rw=write 00:11:01.296 time_based=1 00:11:01.296 runtime=1 00:11:01.296 ioengine=libaio 00:11:01.296 direct=1 00:11:01.296 bs=4096 00:11:01.296 iodepth=1 00:11:01.296 norandommap=0 00:11:01.296 numjobs=1 00:11:01.296 00:11:01.296 verify_dump=1 00:11:01.296 verify_backlog=512 00:11:01.296 verify_state_save=0 00:11:01.296 do_verify=1 00:11:01.296 verify=crc32c-intel 00:11:01.296 [job0] 00:11:01.296 filename=/dev/nvme0n1 00:11:01.296 [job1] 00:11:01.296 filename=/dev/nvme0n2 00:11:01.296 [job2] 00:11:01.296 filename=/dev/nvme0n3 00:11:01.296 [job3] 00:11:01.296 filename=/dev/nvme0n4 00:11:01.612 Could not set queue depth (nvme0n1) 00:11:01.612 Could not set queue depth (nvme0n2) 00:11:01.612 Could not set queue depth (nvme0n3) 00:11:01.612 Could not set queue depth (nvme0n4) 00:11:01.874 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.874 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.874 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.874 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.874 fio-3.35 00:11:01.874 Starting 4 threads 00:11:03.265 00:11:03.265 job0: (groupid=0, jobs=1): err= 0: pid=194830: Fri Dec 13 19:05:37 2024 00:11:03.265 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:11:03.265 slat (nsec): min=8212, max=20052, avg=8779.85, stdev=800.14 00:11:03.265 clat (usec): min=64, max=118, avg=77.77, stdev= 4.84 00:11:03.265 lat (usec): min=72, max=126, avg=86.55, stdev= 4.93 00:11:03.265 clat percentiles (usec): 00:11:03.265 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:11:03.265 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 79], 00:11:03.265 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 84], 95.00th=[ 87], 00:11:03.265 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 108], 99.95th=[ 109], 00:11:03.265 | 99.99th=[ 119] 00:11:03.265 write: IOPS=5634, BW=22.0MiB/s (23.1MB/s)(22.0MiB/1001msec); 0 zone resets 00:11:03.265 slat (nsec): min=10557, max=42415, avg=11434.23, stdev=981.62 00:11:03.265 clat (usec): min=60, max=164, avg=74.21, stdev= 4.79 00:11:03.265 lat (usec): min=71, max=175, avg=85.64, stdev= 4.92 00:11:03.265 clat percentiles (usec): 00:11:03.265 | 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 71], 00:11:03.265 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:11:03.265 | 70.00th=[ 77], 80.00th=[ 78], 90.00th=[ 81], 95.00th=[ 83], 00:11:03.265 | 99.00th=[ 88], 99.50th=[ 89], 99.90th=[ 97], 99.95th=[ 103], 00:11:03.265 | 99.99th=[ 165] 00:11:03.265 bw ( KiB/s): min=24576, max=24576, per=38.27%, avg=24576.00, stdev= 0.00, samples=1 00:11:03.266 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:11:03.266 lat (usec) : 100=99.83%, 250=0.17% 00:11:03.266 cpu : usr=8.60%, sys=15.10%, ctx=11272, majf=0, minf=1 00:11:03.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.266 issued rwts: total=5632,5640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.266 job1: (groupid=0, jobs=1): err= 0: pid=194831: Fri Dec 13 19:05:37 2024 00:11:03.266 read: IOPS=3308, BW=12.9MiB/s (13.6MB/s)(12.9MiB/1001msec) 00:11:03.266 slat (nsec): min=8211, max=33186, avg=9488.41, stdev=2328.15 00:11:03.266 clat (usec): min=62, max=223, avg=134.57, stdev=35.96 00:11:03.266 lat (usec): min=76, max=231, avg=144.06, stdev=35.14 00:11:03.266 clat percentiles (usec): 00:11:03.266 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 80], 20.00th=[ 86], 00:11:03.266 | 30.00th=[ 130], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 151], 00:11:03.266 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 188], 00:11:03.266 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 217], 99.95th=[ 219], 00:11:03.266 | 99.99th=[ 225] 00:11:03.266 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:03.266 slat (nsec): min=10237, max=54923, avg=11814.22, stdev=2592.29 00:11:03.266 clat (usec): min=59, max=202, avg=129.41, stdev=31.01 
00:11:03.266 lat (usec): min=73, max=213, avg=141.22, stdev=30.16 00:11:03.266 clat percentiles (usec): 00:11:03.266 | 1.00th=[ 68], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 86], 00:11:03.266 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:11:03.266 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 163], 95.00th=[ 174], 00:11:03.266 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 200], 99.95th=[ 202], 00:11:03.266 | 99.99th=[ 204] 00:11:03.266 bw ( KiB/s): min=13184, max=13184, per=20.53%, avg=13184.00, stdev= 0.00, samples=1 00:11:03.266 iops : min= 3296, max= 3296, avg=3296.00, stdev= 0.00, samples=1 00:11:03.266 lat (usec) : 100=24.70%, 250=75.30% 00:11:03.266 cpu : usr=5.30%, sys=9.40%, ctx=6896, majf=0, minf=1 00:11:03.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.266 issued rwts: total=3312,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.266 job2: (groupid=0, jobs=1): err= 0: pid=194832: Fri Dec 13 19:05:37 2024 00:11:03.266 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:03.266 slat (nsec): min=8496, max=19793, avg=9263.84, stdev=804.40 00:11:03.266 clat (usec): min=74, max=291, avg=146.19, stdev=23.13 00:11:03.266 lat (usec): min=83, max=309, avg=155.46, stdev=23.18 00:11:03.266 clat percentiles (usec): 00:11:03.266 | 1.00th=[ 82], 5.00th=[ 96], 10.00th=[ 123], 20.00th=[ 133], 00:11:03.266 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:11:03.266 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 172], 95.00th=[ 188], 00:11:03.266 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 215], 99.95th=[ 235], 00:11:03.266 | 99.99th=[ 293] 00:11:03.266 write: IOPS=3418, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec); 0 zone resets 00:11:03.266 slat (nsec): min=10417, max=51409, avg=11503.95, stdev=1502.44 00:11:03.266 clat (usec): min=69, max=210, avg=136.89, stdev=21.56 00:11:03.266 lat (usec): min=80, max=224, avg=148.39, stdev=21.58 00:11:03.266 clat percentiles (usec): 00:11:03.266 | 1.00th=[ 76], 5.00th=[ 88], 10.00th=[ 116], 20.00th=[ 126], 00:11:03.266 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:11:03.266 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 163], 95.00th=[ 176], 00:11:03.266 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 200], 99.95th=[ 200], 00:11:03.266 | 99.99th=[ 210] 00:11:03.266 bw ( KiB/s): min=13192, max=13192, per=20.54%, avg=13192.00, stdev= 0.00, samples=1 00:11:03.266 iops : min= 3298, max= 3298, avg=3298.00, stdev= 0.00, samples=1 00:11:03.266 lat (usec) : 100=6.22%, 250=93.76%, 500=0.02% 00:11:03.266 cpu : usr=6.00%, sys=7.80%, ctx=6497, majf=0, minf=1 00:11:03.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.266 issued rwts: total=3072,3422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.266 job3: (groupid=0, jobs=1): err= 0: pid=194833: Fri Dec 13 19:05:37 2024 00:11:03.266 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:03.266 slat (nsec): min=8413, max=22053, avg=9086.95, stdev=856.01 00:11:03.266 clat (usec): min=73, max=385, avg=146.38, 
stdev=23.59 00:11:03.266 lat (usec): min=82, max=394, avg=155.46, stdev=23.62 00:11:03.266 clat percentiles (usec): 00:11:03.266 | 1.00th=[ 82], 5.00th=[ 95], 10.00th=[ 122], 20.00th=[ 133], 00:11:03.266 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:11:03.266 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 188], 00:11:03.266 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 217], 99.95th=[ 225], 00:11:03.266 | 99.99th=[ 388] 00:11:03.266 write: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec); 0 zone resets 00:11:03.266 slat (nsec): min=10453, max=50339, avg=11428.20, stdev=1249.08 00:11:03.266 clat (usec): min=68, max=202, avg=136.92, stdev=21.39 00:11:03.266 lat (usec): min=80, max=213, avg=148.34, stdev=21.45 00:11:03.266 clat percentiles (usec): 00:11:03.266 | 1.00th=[ 77], 5.00th=[ 89], 10.00th=[ 116], 20.00th=[ 126], 00:11:03.266 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:11:03.266 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 161], 95.00th=[ 176], 00:11:03.266 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 202], 99.95th=[ 202], 00:11:03.266 | 99.99th=[ 202] 00:11:03.266 bw ( KiB/s): min=13192, max=13192, per=20.54%, avg=13192.00, stdev= 0.00, samples=1 00:11:03.266 iops : min= 3298, max= 3298, avg=3298.00, stdev= 0.00, samples=1 00:11:03.266 lat (usec) : 100=6.25%, 250=93.73%, 500=0.02% 00:11:03.266 cpu : usr=4.80%, sys=8.90%, ctx=6495, majf=0, minf=1 00:11:03.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.266 issued rwts: total=3072,3423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.266 00:11:03.266 Run status group 0 (all jobs): 00:11:03.266 READ: bw=58.9MiB/s (61.7MB/s), 12.0MiB/s-22.0MiB/s (12.6MB/s-23.0MB/s), io=58.9MiB (61.8MB), run=1001-1001msec 00:11:03.266 WRITE: bw=62.7MiB/s (65.8MB/s), 13.4MiB/s-22.0MiB/s (14.0MB/s-23.1MB/s), io=62.8MiB (65.8MB), run=1001-1001msec 00:11:03.266 00:11:03.266 Disk stats (read/write): 00:11:03.266 nvme0n1: ios=4657/4787, merge=0/0, ticks=321/315, in_queue=636, util=83.97% 00:11:03.266 nvme0n2: ios=2560/2726, merge=0/0, ticks=353/345, in_queue=698, util=84.97% 00:11:03.266 nvme0n3: ios=2560/2726, merge=0/0, ticks=362/358, in_queue=720, util=88.31% 00:11:03.266 nvme0n4: ios=2560/2727, merge=0/0, ticks=356/352, in_queue=708, util=89.35% 00:11:03.266 19:05:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:03.266 [global] 00:11:03.266 thread=1 00:11:03.266 invalidate=1 00:11:03.266 rw=randwrite 00:11:03.266 time_based=1 00:11:03.266 runtime=1 00:11:03.266 ioengine=libaio 00:11:03.266 direct=1 00:11:03.266 bs=4096 00:11:03.266 iodepth=1 00:11:03.266 norandommap=0 00:11:03.266 numjobs=1 00:11:03.266 00:11:03.266 verify_dump=1 00:11:03.266 verify_backlog=512 00:11:03.266 verify_state_save=0 00:11:03.266 do_verify=1 00:11:03.266 verify=crc32c-intel 00:11:03.266 [job0] 00:11:03.266 filename=/dev/nvme0n1 00:11:03.266 [job1] 00:11:03.266 filename=/dev/nvme0n2 00:11:03.266 [job2] 00:11:03.266 filename=/dev/nvme0n3 00:11:03.266 [job3] 00:11:03.266 filename=/dev/nvme0n4 00:11:03.266 Could not set queue depth (nvme0n1) 00:11:03.266 Could not set queue depth (nvme0n2) 00:11:03.266 Could not set 
queue depth (nvme0n3) 00:11:03.266 Could not set queue depth (nvme0n4) 00:11:03.524 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.524 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.524 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.524 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.524 fio-3.35 00:11:03.524 Starting 4 threads 00:11:04.907 00:11:04.907 job0: (groupid=0, jobs=1): err= 0: pid=195259: Fri Dec 13 19:05:38 2024 00:11:04.907 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:11:04.907 slat (nsec): min=8176, max=29191, avg=8777.98, stdev=837.99 00:11:04.907 clat (usec): min=65, max=166, avg=94.89, stdev=21.05 00:11:04.907 lat (usec): min=73, max=175, avg=103.67, stdev=21.06 00:11:04.907 clat percentiles (usec): 00:11:04.907 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:11:04.907 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 111], 00:11:04.907 | 70.00th=[ 115], 80.00th=[ 119], 90.00th=[ 123], 95.00th=[ 126], 00:11:04.907 | 99.00th=[ 133], 99.50th=[ 137], 99.90th=[ 141], 99.95th=[ 143], 00:11:04.907 | 99.99th=[ 167] 00:11:04.907 write: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec); 0 zone resets 00:11:04.907 slat (nsec): min=10029, max=40344, avg=11065.95, stdev=1202.38 00:11:04.907 clat (usec): min=59, max=176, avg=90.67, stdev=23.36 00:11:04.907 lat (usec): min=69, max=186, avg=101.73, stdev=23.32 00:11:04.907 clat percentiles (usec): 00:11:04.907 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 72], 00:11:04.907 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 83], 00:11:04.907 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 123], 95.00th=[ 127], 00:11:04.907 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:11:04.907 | 99.99th=[ 178] 00:11:04.907 bw ( KiB/s): min=24576, max=24576, per=39.31%, avg=24576.00, stdev= 0.00, samples=1 00:11:04.907 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:11:04.907 lat (usec) : 100=59.65%, 250=40.35% 00:11:04.907 cpu : usr=8.10%, sys=11.80%, ctx=9503, majf=0, minf=1 00:11:04.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.907 issued rwts: total=4608,4895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.907 job1: (groupid=0, jobs=1): err= 0: pid=195261: Fri Dec 13 19:05:38 2024 00:11:04.907 read: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:11:04.907 slat (nsec): min=8299, max=41806, avg=9104.41, stdev=995.50 00:11:04.907 clat (usec): min=80, max=225, avg=137.49, stdev=20.14 00:11:04.907 lat (usec): min=89, max=234, avg=146.60, stdev=20.22 00:11:04.907 clat percentiles (usec): 00:11:04.907 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 120], 00:11:04.907 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 135], 60.00th=[ 147], 00:11:04.907 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 165], 00:11:04.907 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 223], 99.95th=[ 227], 00:11:04.907 | 99.99th=[ 227] 00:11:04.907 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone 
resets 00:11:04.907 slat (nsec): min=10143, max=54493, avg=11392.88, stdev=1227.40 00:11:04.907 clat (usec): min=66, max=235, avg=128.37, stdev=19.66 00:11:04.907 lat (usec): min=78, max=249, avg=139.76, stdev=19.72 00:11:04.907 clat percentiles (usec): 00:11:04.907 | 1.00th=[ 99], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 113], 00:11:04.907 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 135], 00:11:04.907 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:11:04.907 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 208], 99.95th=[ 219], 00:11:04.907 | 99.99th=[ 237] 00:11:04.907 bw ( KiB/s): min=12288, max=12288, per=19.65%, avg=12288.00, stdev= 0.00, samples=1 00:11:04.907 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:04.907 lat (usec) : 100=0.77%, 250=99.23% 00:11:04.907 cpu : usr=4.40%, sys=7.40%, ctx=6875, majf=0, minf=1 00:11:04.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.907 issued rwts: total=3291,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.907 job2: (groupid=0, jobs=1): err= 0: pid=195263: Fri Dec 13 19:05:38 2024 00:11:04.907 read: IOPS=3352, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:11:04.907 slat (nsec): min=8529, max=18854, avg=9228.64, stdev=740.22 00:11:04.907 clat (usec): min=78, max=209, avg=134.76, stdev=19.97 00:11:04.907 lat (usec): min=87, max=219, avg=143.99, stdev=19.98 00:11:04.907 clat percentiles (usec): 00:11:04.907 | 1.00th=[ 102], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 117], 00:11:04.907 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 128], 60.00th=[ 147], 00:11:04.907 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 165], 00:11:04.907 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 200], 99.95th=[ 208], 00:11:04.907 | 99.99th=[ 210] 00:11:04.907 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:04.907 slat (nsec): min=10245, max=45607, avg=11414.79, stdev=1477.51 00:11:04.907 clat (usec): min=73, max=195, avg=128.05, stdev=17.13 00:11:04.907 lat (usec): min=84, max=228, avg=139.46, stdev=16.98 00:11:04.907 clat percentiles (usec): 00:11:04.907 | 1.00th=[ 90], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 114], 00:11:04.907 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 133], 00:11:04.907 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:11:04.907 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 194], 00:11:04.907 | 99.99th=[ 196] 00:11:04.907 bw ( KiB/s): min=12680, max=12680, per=20.28%, avg=12680.00, stdev= 0.00, samples=1 00:11:04.907 iops : min= 3170, max= 3170, avg=3170.00, stdev= 0.00, samples=1 00:11:04.907 lat (usec) : 100=1.10%, 250=98.90% 00:11:04.907 cpu : usr=5.40%, sys=9.00%, ctx=6940, majf=0, minf=1 00:11:04.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.907 issued rwts: total=3356,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.907 job3: (groupid=0, jobs=1): err= 0: pid=195264: Fri Dec 13 19:05:38 2024 00:11:04.907 read: IOPS=3434, BW=13.4MiB/s 
(14.1MB/s)(13.4MiB/1001msec) 00:11:04.907 slat (nsec): min=8416, max=21404, avg=9159.02, stdev=756.64 00:11:04.907 clat (usec): min=74, max=203, avg=133.19, stdev=21.35 00:11:04.907 lat (usec): min=83, max=212, avg=142.35, stdev=21.36 00:11:04.907 clat percentiles (usec): 00:11:04.907 | 1.00th=[ 82], 5.00th=[ 92], 10.00th=[ 114], 20.00th=[ 118], 00:11:04.907 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 143], 00:11:04.907 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 163], 00:11:04.907 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 200], 99.95th=[ 204], 00:11:04.907 | 99.99th=[ 204] 00:11:04.907 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:04.908 slat (nsec): min=7487, max=37640, avg=11104.60, stdev=1387.34 00:11:04.908 clat (usec): min=75, max=202, avg=126.79, stdev=17.82 00:11:04.908 lat (usec): min=86, max=212, avg=137.90, stdev=17.69 00:11:04.908 clat percentiles (usec): 00:11:04.908 | 1.00th=[ 90], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 113], 00:11:04.908 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 121], 60.00th=[ 133], 00:11:04.908 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:11:04.908 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 196], 00:11:04.908 | 99.99th=[ 202] 00:11:04.908 bw ( KiB/s): min=12744, max=12744, per=20.38%, avg=12744.00, stdev= 0.00, samples=1 00:11:04.908 iops : min= 3186, max= 3186, avg=3186.00, stdev= 0.00, samples=1 00:11:04.908 lat (usec) : 100=4.10%, 250=95.90% 00:11:04.908 cpu : usr=5.70%, sys=9.10%, ctx=7022, majf=0, minf=2 00:11:04.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.908 issued rwts: total=3438,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.908 00:11:04.908 Run status group 0 (all jobs): 00:11:04.908 READ: bw=57.3MiB/s (60.1MB/s), 12.8MiB/s-18.0MiB/s (13.5MB/s-18.9MB/s), io=57.4MiB (60.2MB), run=1001-1001msec 00:11:04.908 WRITE: bw=61.1MiB/s (64.0MB/s), 14.0MiB/s-19.1MiB/s (14.7MB/s-20.0MB/s), io=61.1MiB (64.1MB), run=1001-1001msec 00:11:04.908 00:11:04.908 Disk stats (read/write): 00:11:04.908 nvme0n1: ios=4145/4153, merge=0/0, ticks=351/319, in_queue=670, util=84.47% 00:11:04.908 nvme0n2: ios=2560/3067, merge=0/0, ticks=351/383, in_queue=734, util=85.20% 00:11:04.908 nvme0n3: ios=2614/3072, merge=0/0, ticks=344/366, in_queue=710, util=88.45% 00:11:04.908 nvme0n4: ios=2631/3072, merge=0/0, ticks=352/363, in_queue=715, util=89.50% 00:11:04.908 19:05:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:04.908 [global] 00:11:04.908 thread=1 00:11:04.908 invalidate=1 00:11:04.908 rw=write 00:11:04.908 time_based=1 00:11:04.908 runtime=1 00:11:04.908 ioengine=libaio 00:11:04.908 direct=1 00:11:04.908 bs=4096 00:11:04.908 iodepth=128 00:11:04.908 norandommap=0 00:11:04.908 numjobs=1 00:11:04.908 00:11:04.908 verify_dump=1 00:11:04.908 verify_backlog=512 00:11:04.908 verify_state_save=0 00:11:04.908 do_verify=1 00:11:04.908 verify=crc32c-intel 00:11:04.908 [job0] 00:11:04.908 filename=/dev/nvme0n1 00:11:04.908 [job1] 00:11:04.908 filename=/dev/nvme0n2 00:11:04.908 [job2] 00:11:04.908 filename=/dev/nvme0n3 00:11:04.908 [job3] 00:11:04.908 
filename=/dev/nvme0n4 00:11:04.908 Could not set queue depth (nvme0n1) 00:11:04.908 Could not set queue depth (nvme0n2) 00:11:04.908 Could not set queue depth (nvme0n3) 00:11:04.908 Could not set queue depth (nvme0n4) 00:11:05.165 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:05.165 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:05.165 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:05.165 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:05.165 fio-3.35 00:11:05.165 Starting 4 threads 00:11:06.570 00:11:06.570 job0: (groupid=0, jobs=1): err= 0: pid=195681: Fri Dec 13 19:05:40 2024 00:11:06.570 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:11:06.570 slat (usec): min=2, max=3303, avg=132.14, stdev=404.43 00:11:06.570 clat (usec): min=4719, max=22268, avg=17171.27, stdev=4337.59 00:11:06.570 lat (usec): min=4722, max=22271, avg=17303.41, stdev=4351.95 00:11:06.570 clat percentiles (usec): 00:11:06.570 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[18220], 00:11:06.570 | 30.00th=[18482], 40.00th=[18744], 50.00th=[18744], 60.00th=[19006], 00:11:06.570 | 70.00th=[19268], 80.00th=[19268], 90.00th=[19530], 95.00th=[19530], 00:11:06.570 | 99.00th=[20317], 99.50th=[21103], 99.90th=[22152], 99.95th=[22152], 00:11:06.570 | 99.99th=[22152] 00:11:06.570 write: IOPS=3597, BW=14.1MiB/s (14.7MB/s)(14.1MiB/1004msec); 0 zone resets 00:11:06.570 slat (usec): min=2, max=4470, avg=141.52, stdev=403.92 00:11:06.570 clat (usec): min=3061, max=20601, avg=18045.94, stdev=1603.06 00:11:06.570 lat (usec): min=6395, max=20611, avg=18187.46, stdev=1556.25 00:11:06.570 clat percentiles (usec): 00:11:06.570 | 1.00th=[ 9634], 5.00th=[15926], 10.00th=[17433], 20.00th=[17695], 00:11:06.570 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:11:06.570 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19006], 95.00th=[19268], 00:11:06.570 | 99.00th=[20055], 99.50th=[20055], 99.90th=[20317], 99.95th=[20579], 00:11:06.570 | 99.99th=[20579] 00:11:06.570 bw ( KiB/s): min=14272, max=14400, per=15.95%, avg=14336.00, stdev=90.51, samples=2 00:11:06.570 iops : min= 3568, max= 3600, avg=3584.00, stdev=22.63, samples=2 00:11:06.570 lat (msec) : 4=0.01%, 10=6.66%, 20=92.09%, 50=1.24% 00:11:06.570 cpu : usr=1.89%, sys=4.19%, ctx=2608, majf=0, minf=1 00:11:06.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:06.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.570 issued rwts: total=3584,3612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.570 job1: (groupid=0, jobs=1): err= 0: pid=195682: Fri Dec 13 19:05:40 2024 00:11:06.570 read: IOPS=11.4k, BW=44.5MiB/s (46.7MB/s)(44.6MiB/1002msec) 00:11:06.570 slat (nsec): min=1992, max=5052.6k, avg=42457.51, stdev=164442.63 00:11:06.570 clat (usec): min=832, max=21097, avg=5635.36, stdev=1416.83 00:11:06.570 lat (usec): min=1378, max=21101, avg=5677.82, stdev=1420.46 00:11:06.570 clat percentiles (usec): 00:11:06.570 | 1.00th=[ 4555], 5.00th=[ 4948], 10.00th=[ 5080], 20.00th=[ 5276], 00:11:06.570 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5473], 60.00th=[ 5538], 00:11:06.570 | 
70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 5997], 00:11:06.570 | 99.00th=[14353], 99.50th=[17957], 99.90th=[20055], 99.95th=[21103], 00:11:06.570 | 99.99th=[21103] 00:11:06.570 write: IOPS=11.8k, BW=45.9MiB/s (48.1MB/s)(46.0MiB/1002msec); 0 zone resets 00:11:06.570 slat (usec): min=2, max=2493, avg=40.05, stdev=139.24 00:11:06.570 clat (usec): min=2687, max=18722, avg=5274.88, stdev=1175.02 00:11:06.570 lat (usec): min=2752, max=18737, avg=5314.93, stdev=1178.49 00:11:06.570 clat percentiles (usec): 00:11:06.570 | 1.00th=[ 4424], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 5014], 00:11:06.570 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5145], 60.00th=[ 5211], 00:11:06.570 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:11:06.570 | 99.00th=[12649], 99.50th=[16450], 99.90th=[17433], 99.95th=[17695], 00:11:06.570 | 99.99th=[18220] 00:11:06.570 bw ( KiB/s): min=47088, max=47120, per=52.42%, avg=47104.00, stdev=22.63, samples=2 00:11:06.570 iops : min=11772, max=11780, avg=11776.00, stdev= 5.66, samples=2 00:11:06.570 lat (usec) : 1000=0.01% 00:11:06.570 lat (msec) : 2=0.06%, 4=0.35%, 10=98.13%, 20=1.38%, 50=0.06% 00:11:06.570 cpu : usr=5.19%, sys=9.79%, ctx=1608, majf=0, minf=1 00:11:06.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:06.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.570 issued rwts: total=11424,11776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.570 job2: (groupid=0, jobs=1): err= 0: pid=195683: Fri Dec 13 19:05:40 2024 00:11:06.570 read: IOPS=3255, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1004msec) 00:11:06.570 slat (usec): min=2, max=2147, avg=148.03, stdev=351.24 00:11:06.570 clat (usec): min=3062, max=22212, avg=18756.38, stdev=1782.09 00:11:06.570 lat (usec): min=3828, max=22215, avg=18904.42, stdev=1752.92 00:11:06.571 clat percentiles (usec): 00:11:06.571 | 1.00th=[ 7963], 5.00th=[17695], 10.00th=[18220], 20.00th=[18482], 00:11:06.571 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19006], 60.00th=[19268], 00:11:06.571 | 70.00th=[19268], 80.00th=[19530], 90.00th=[19530], 95.00th=[19792], 00:11:06.571 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[21103], 00:11:06.571 | 99.99th=[22152] 00:11:06.571 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:06.571 slat (usec): min=2, max=1778, avg=139.85, stdev=328.23 00:11:06.571 clat (usec): min=13402, max=20796, avg=18243.43, stdev=662.80 00:11:06.571 lat (usec): min=13447, max=20801, avg=18383.29, stdev=587.02 00:11:06.571 clat percentiles (usec): 00:11:06.571 | 1.00th=[16319], 5.00th=[17171], 10.00th=[17433], 20.00th=[17957], 00:11:06.571 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:11:06.571 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19006], 95.00th=[19268], 00:11:06.571 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20579], 99.95th=[20841], 00:11:06.571 | 99.99th=[20841] 00:11:06.571 bw ( KiB/s): min=14240, max=14432, per=15.95%, avg=14336.00, stdev=135.76, samples=2 00:11:06.571 iops : min= 3560, max= 3608, avg=3584.00, stdev=33.94, samples=2 00:11:06.571 lat (msec) : 4=0.13%, 10=0.60%, 20=98.06%, 50=1.21% 00:11:06.571 cpu : usr=2.19%, sys=3.89%, ctx=2361, majf=0, minf=1 00:11:06.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:06.571 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.571 issued rwts: total=3269,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.571 job3: (groupid=0, jobs=1): err= 0: pid=195684: Fri Dec 13 19:05:40 2024 00:11:06.571 read: IOPS=3286, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1004msec) 00:11:06.571 slat (usec): min=2, max=2469, avg=146.51, stdev=359.03 00:11:06.571 clat (usec): min=3060, max=21334, avg=18752.92, stdev=1829.82 00:11:06.571 lat (usec): min=3855, max=21338, avg=18899.43, stdev=1800.82 00:11:06.571 clat percentiles (usec): 00:11:06.571 | 1.00th=[ 7963], 5.00th=[17171], 10.00th=[18220], 20.00th=[18744], 00:11:06.571 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19006], 60.00th=[19268], 00:11:06.571 | 70.00th=[19268], 80.00th=[19530], 90.00th=[19530], 95.00th=[19792], 00:11:06.571 | 99.00th=[20841], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:11:06.571 | 99.99th=[21365] 00:11:06.571 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:06.571 slat (usec): min=2, max=2476, avg=139.95, stdev=341.16 00:11:06.571 clat (usec): min=12774, max=20588, avg=18082.01, stdev=879.05 00:11:06.571 lat (usec): min=13528, max=20598, avg=18221.96, stdev=821.17 00:11:06.571 clat percentiles (usec): 00:11:06.571 | 1.00th=[13960], 5.00th=[16712], 10.00th=[17433], 20.00th=[17695], 00:11:06.571 | 30.00th=[17957], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:11:06.571 | 70.00th=[18482], 80.00th=[18744], 90.00th=[18744], 95.00th=[19006], 00:11:06.571 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20317], 99.95th=[20579], 00:11:06.571 | 99.99th=[20579] 00:11:06.571 bw ( KiB/s): min=14272, max=14400, per=15.95%, avg=14336.00, stdev=90.51, samples=2 00:11:06.571 iops : min= 3568, max= 3600, avg=3584.00, stdev=22.63, samples=2 00:11:06.571 lat (msec) : 4=0.12%, 10=0.64%, 20=98.10%, 50=1.15% 00:11:06.571 cpu : usr=1.99%, sys=4.09%, ctx=2327, majf=0, minf=1 00:11:06.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:06.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.571 issued rwts: total=3300,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.571 00:11:06.571 Run status group 0 (all jobs): 00:11:06.571 READ: bw=83.9MiB/s (88.0MB/s), 12.7MiB/s-44.5MiB/s (13.3MB/s-46.7MB/s), io=84.3MiB (88.4MB), run=1002-1004msec 00:11:06.571 WRITE: bw=87.8MiB/s (92.0MB/s), 13.9MiB/s-45.9MiB/s (14.6MB/s-48.1MB/s), io=88.1MiB (92.4MB), run=1002-1004msec 00:11:06.571 00:11:06.571 Disk stats (read/write): 00:11:06.571 nvme0n1: ios=2988/3072, merge=0/0, ticks=12315/13951, in_queue=26266, util=84.25% 00:11:06.571 nvme0n2: ios=9353/9728, merge=0/0, ticks=14841/14385, in_queue=29226, util=85.13% 00:11:06.571 nvme0n3: ios=2579/3072, merge=0/0, ticks=12392/13908, in_queue=26300, util=88.31% 00:11:06.571 nvme0n4: ios=2608/3072, merge=0/0, ticks=12415/13911, in_queue=26326, util=89.45% 00:11:06.571 19:05:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:06.571 [global] 00:11:06.571 thread=1 00:11:06.571 invalidate=1 00:11:06.571 rw=randwrite 00:11:06.571 time_based=1 00:11:06.571 
runtime=1 00:11:06.571 ioengine=libaio 00:11:06.571 direct=1 00:11:06.571 bs=4096 00:11:06.571 iodepth=128 00:11:06.571 norandommap=0 00:11:06.571 numjobs=1 00:11:06.571 00:11:06.571 verify_dump=1 00:11:06.571 verify_backlog=512 00:11:06.571 verify_state_save=0 00:11:06.571 do_verify=1 00:11:06.571 verify=crc32c-intel 00:11:06.571 [job0] 00:11:06.571 filename=/dev/nvme0n1 00:11:06.571 [job1] 00:11:06.571 filename=/dev/nvme0n2 00:11:06.571 [job2] 00:11:06.571 filename=/dev/nvme0n3 00:11:06.571 [job3] 00:11:06.571 filename=/dev/nvme0n4 00:11:06.571 Could not set queue depth (nvme0n1) 00:11:06.571 Could not set queue depth (nvme0n2) 00:11:06.571 Could not set queue depth (nvme0n3) 00:11:06.571 Could not set queue depth (nvme0n4) 00:11:06.830 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.830 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.830 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.830 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.830 fio-3.35 00:11:06.830 Starting 4 threads 00:11:08.209 00:11:08.209 job0: (groupid=0, jobs=1): err= 0: pid=196108: Fri Dec 13 19:05:42 2024 00:11:08.209 read: IOPS=4189, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1004msec) 00:11:08.209 slat (usec): min=2, max=2949, avg=116.08, stdev=318.98 00:11:08.209 clat (usec): min=2374, max=22413, avg=14732.51, stdev=2897.18 00:11:08.209 lat (usec): min=3949, max=22440, avg=14848.59, stdev=2903.42 00:11:08.209 clat percentiles (usec): 00:11:08.209 | 1.00th=[ 9896], 5.00th=[12518], 10.00th=[12780], 20.00th=[13042], 00:11:08.209 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:11:08.209 | 70.00th=[13698], 80.00th=[19006], 90.00th=[19792], 95.00th=[20055], 00:11:08.209 | 99.00th=[20579], 99.50th=[20841], 99.90th=[22152], 99.95th=[22152], 00:11:08.209 | 99.99th=[22414] 00:11:08.209 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:08.209 slat (usec): min=2, max=3219, avg=107.77, stdev=309.99 00:11:08.209 clat (usec): min=9720, max=21625, avg=14103.69, stdev=2838.05 00:11:08.209 lat (usec): min=9731, max=21997, avg=14211.47, stdev=2845.50 00:11:08.209 clat percentiles (usec): 00:11:08.209 | 1.00th=[11338], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:11:08.209 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12780], 00:11:08.209 | 70.00th=[13042], 80.00th=[18482], 90.00th=[19006], 95.00th=[19268], 00:11:08.209 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:11:08.209 | 99.99th=[21627] 00:11:08.209 bw ( KiB/s): min=16240, max=20480, per=18.63%, avg=18360.00, stdev=2998.13, samples=2 00:11:08.209 iops : min= 4060, max= 5120, avg=4590.00, stdev=749.53, samples=2 00:11:08.209 lat (msec) : 4=0.06%, 10=0.51%, 20=96.41%, 50=3.02% 00:11:08.209 cpu : usr=1.00%, sys=4.89%, ctx=1637, majf=0, minf=1 00:11:08.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:08.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.209 issued rwts: total=4206,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.209 job1: (groupid=0, jobs=1): err= 0: pid=196109: Fri 
Dec 13 19:05:42 2024 00:11:08.209 read: IOPS=4183, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1004msec) 00:11:08.209 slat (usec): min=2, max=3213, avg=116.06, stdev=332.65 00:11:08.209 clat (usec): min=2326, max=21734, avg=14729.39, stdev=2936.80 00:11:08.209 lat (usec): min=3870, max=22931, avg=14845.45, stdev=2942.77 00:11:08.209 clat percentiles (usec): 00:11:08.209 | 1.00th=[ 9110], 5.00th=[12387], 10.00th=[12649], 20.00th=[13042], 00:11:08.209 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:11:08.209 | 70.00th=[13698], 80.00th=[19268], 90.00th=[19792], 95.00th=[20055], 00:11:08.209 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21627], 00:11:08.209 | 99.99th=[21627] 00:11:08.209 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:08.209 slat (usec): min=2, max=3109, avg=107.86, stdev=298.01 00:11:08.209 clat (usec): min=9671, max=21607, avg=14109.23, stdev=2804.11 00:11:08.209 lat (usec): min=9717, max=22051, avg=14217.09, stdev=2812.66 00:11:08.209 clat percentiles (usec): 00:11:08.209 | 1.00th=[11338], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:11:08.209 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:11:08.209 | 70.00th=[13173], 80.00th=[18220], 90.00th=[19006], 95.00th=[19268], 00:11:08.209 | 99.00th=[20055], 99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 00:11:08.209 | 99.99th=[21627] 00:11:08.209 bw ( KiB/s): min=16192, max=20480, per=18.60%, avg=18336.00, stdev=3032.07, samples=2 00:11:08.209 iops : min= 4048, max= 5120, avg=4584.00, stdev=758.02, samples=2 00:11:08.209 lat (msec) : 4=0.09%, 10=0.50%, 20=95.69%, 50=3.72% 00:11:08.209 cpu : usr=1.79%, sys=4.39%, ctx=1660, majf=0, minf=1 00:11:08.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:08.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.209 issued rwts: total=4200,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.209 job2: (groupid=0, jobs=1): err= 0: pid=196110: Fri Dec 13 19:05:42 2024 00:11:08.209 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:11:08.209 slat (usec): min=2, max=3897, avg=74.62, stdev=307.28 00:11:08.209 clat (usec): min=6362, max=19301, avg=9758.82, stdev=2865.18 00:11:08.209 lat (usec): min=6374, max=19334, avg=9833.44, stdev=2893.69 00:11:08.209 clat percentiles (usec): 00:11:08.209 | 1.00th=[ 7373], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 7832], 00:11:08.209 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:11:08.209 | 70.00th=[ 8848], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:11:08.209 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:11:08.209 | 99.99th=[19268] 00:11:08.209 write: IOPS=6668, BW=26.0MiB/s (27.3MB/s)(26.2MiB/1004msec); 0 zone resets 00:11:08.209 slat (usec): min=2, max=3880, avg=71.46, stdev=291.81 00:11:08.209 clat (usec): min=3128, max=18182, avg=9276.42, stdev=2735.43 00:11:08.209 lat (usec): min=3733, max=18187, avg=9347.88, stdev=2763.27 00:11:08.209 clat percentiles (usec): 00:11:08.209 | 1.00th=[ 7046], 5.00th=[ 7308], 10.00th=[ 7373], 20.00th=[ 7504], 00:11:08.209 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:11:08.209 | 70.00th=[ 8455], 80.00th=[13435], 90.00th=[14222], 95.00th=[14615], 00:11:08.209 | 99.00th=[15926], 99.50th=[17171], 99.90th=[18220], 
99.95th=[18220], 00:11:08.209 | 99.99th=[18220] 00:11:08.209 bw ( KiB/s): min=20480, max=32768, per=27.01%, avg=26624.00, stdev=8688.93, samples=2 00:11:08.209 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:11:08.209 lat (msec) : 4=0.09%, 10=77.17%, 20=22.74% 00:11:08.209 cpu : usr=3.69%, sys=4.99%, ctx=952, majf=0, minf=1 00:11:08.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:08.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.209 issued rwts: total=6656,6695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.209 job3: (groupid=0, jobs=1): err= 0: pid=196111: Fri Dec 13 19:05:42 2024 00:11:08.209 read: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(34.0MiB/1002msec) 00:11:08.209 slat (usec): min=2, max=1456, avg=56.51, stdev=214.36 00:11:08.209 clat (usec): min=5298, max=8524, avg=7413.37, stdev=777.12 00:11:08.209 lat (usec): min=6072, max=8526, avg=7469.87, stdev=753.78 00:11:08.209 clat percentiles (usec): 00:11:08.209 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6587], 00:11:08.209 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 7767], 60.00th=[ 7963], 00:11:08.209 | 70.00th=[ 8094], 80.00th=[ 8160], 90.00th=[ 8225], 95.00th=[ 8356], 00:11:08.209 | 99.00th=[ 8455], 99.50th=[ 8455], 99.90th=[ 8586], 99.95th=[ 8586], 00:11:08.209 | 99.99th=[ 8586] 00:11:08.209 write: IOPS=8812, BW=34.4MiB/s (36.1MB/s)(34.5MiB/1002msec); 0 zone resets 00:11:08.209 slat (usec): min=2, max=2230, avg=54.08, stdev=205.70 00:11:08.209 clat (usec): min=909, max=9214, avg=7053.83, stdev=855.75 00:11:08.209 lat (usec): min=1683, max=9219, avg=7107.92, stdev=836.97 00:11:08.209 clat percentiles (usec): 00:11:08.209 | 1.00th=[ 5145], 5.00th=[ 5997], 10.00th=[ 6063], 20.00th=[ 6194], 00:11:08.210 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7504], 60.00th=[ 7635], 00:11:08.210 | 70.00th=[ 7701], 80.00th=[ 7767], 90.00th=[ 7832], 95.00th=[ 8029], 00:11:08.210 | 99.00th=[ 8225], 99.50th=[ 8225], 99.90th=[ 8848], 99.95th=[ 8979], 00:11:08.210 | 99.99th=[ 9241] 00:11:08.210 bw ( KiB/s): min=32768, max=36864, per=35.32%, avg=34816.00, stdev=2896.31, samples=2 00:11:08.210 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:11:08.210 lat (usec) : 1000=0.01% 00:11:08.210 lat (msec) : 2=0.07%, 4=0.27%, 10=99.65% 00:11:08.210 cpu : usr=4.30%, sys=6.39%, ctx=1101, majf=0, minf=2 00:11:08.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:08.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.210 issued rwts: total=8704,8830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.210 00:11:08.210 Run status group 0 (all jobs): 00:11:08.210 READ: bw=92.5MiB/s (97.0MB/s), 16.3MiB/s-33.9MiB/s (17.1MB/s-35.6MB/s), io=92.8MiB (97.3MB), run=1002-1004msec 00:11:08.210 WRITE: bw=96.3MiB/s (101MB/s), 17.9MiB/s-34.4MiB/s (18.8MB/s-36.1MB/s), io=96.6MiB (101MB), run=1002-1004msec 00:11:08.210 00:11:08.210 Disk stats (read/write): 00:11:08.210 nvme0n1: ios=3645/4096, merge=0/0, ticks=12765/13636, in_queue=26401, util=84.47% 00:11:08.210 nvme0n2: ios=3592/4096, merge=0/0, ticks=12762/13627, in_queue=26389, util=85.22% 00:11:08.210 nvme0n3: ios=5736/6144, merge=0/0, 
ticks=13511/14322, in_queue=27833, util=88.37% 00:11:08.210 nvme0n4: ios=7032/7168, merge=0/0, ticks=17301/16798, in_queue=34099, util=89.51% 00:11:08.210 19:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:08.210 19:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=196376 00:11:08.210 19:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:08.210 19:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:08.210 [global] 00:11:08.210 thread=1 00:11:08.210 invalidate=1 00:11:08.210 rw=read 00:11:08.210 time_based=1 00:11:08.210 runtime=10 00:11:08.210 ioengine=libaio 00:11:08.210 direct=1 00:11:08.210 bs=4096 00:11:08.210 iodepth=1 00:11:08.210 norandommap=1 00:11:08.210 numjobs=1 00:11:08.210 00:11:08.210 [job0] 00:11:08.210 filename=/dev/nvme0n1 00:11:08.210 [job1] 00:11:08.210 filename=/dev/nvme0n2 00:11:08.210 [job2] 00:11:08.210 filename=/dev/nvme0n3 00:11:08.210 [job3] 00:11:08.210 filename=/dev/nvme0n4 00:11:08.210 Could not set queue depth (nvme0n1) 00:11:08.210 Could not set queue depth (nvme0n2) 00:11:08.210 Could not set queue depth (nvme0n3) 00:11:08.210 Could not set queue depth (nvme0n4) 00:11:08.469 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.469 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.469 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.469 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:08.469 fio-3.35 00:11:08.469 Starting 4 threads 00:11:11.012 19:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:11.272 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=74100736, buflen=4096 00:11:11.272 fio: pid=196543, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:11.272 19:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:11.272 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=89415680, buflen=4096 00:11:11.272 fio: pid=196542, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:11.272 19:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.272 19:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:11.532 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=61628416, buflen=4096 00:11:11.532 fio: pid=196534, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:11.532 19:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.532 19:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:11.793 fio: io_u 
error on file /dev/nvme0n2: Operation not supported: read offset=1593344, buflen=4096 00:11:11.793 fio: pid=196536, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:11.793 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.793 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:11.793 00:11:11.793 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=196534: Fri Dec 13 19:05:46 2024 00:11:11.793 read: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(123MiB/3032msec) 00:11:11.793 slat (usec): min=8, max=15962, avg=10.06, stdev=122.37 00:11:11.793 clat (usec): min=51, max=305, avg=84.24, stdev= 6.62 00:11:11.793 lat (usec): min=60, max=16053, avg=94.29, stdev=122.64 00:11:11.793 clat percentiles (usec): 00:11:11.793 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:11:11.793 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:11:11.793 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 92], 95.00th=[ 95], 00:11:11.793 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 115], 99.95th=[ 126], 00:11:11.793 | 99.99th=[ 227] 00:11:11.793 bw ( KiB/s): min=42120, max=42288, per=32.84%, avg=42196.80, stdev=71.69, samples=5 00:11:11.793 iops : min=10530, max=10572, avg=10548.80, stdev=17.70, samples=5 00:11:11.793 lat (usec) : 100=98.35%, 250=1.64%, 500=0.01% 00:11:11.793 cpu : usr=5.15%, sys=14.19%, ctx=31436, majf=0, minf=1 00:11:11.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.793 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.793 issued rwts: total=31431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.793 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=196536: Fri Dec 13 19:05:46 2024 00:11:11.793 read: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(130MiB/3253msec) 00:11:11.793 slat (usec): min=8, max=12775, avg=10.48, stdev=129.02 00:11:11.793 clat (usec): min=43, max=21490, avg=85.44, stdev=167.31 00:11:11.793 lat (usec): min=60, max=21499, avg=95.92, stdev=211.38 00:11:11.793 clat percentiles (usec): 00:11:11.793 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 75], 00:11:11.793 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:11:11.793 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 125], 95.00th=[ 135], 00:11:11.793 | 99.00th=[ 145], 99.50th=[ 149], 99.90th=[ 180], 99.95th=[ 188], 00:11:11.793 | 99.99th=[ 1004] 00:11:11.793 bw ( KiB/s): min=28408, max=45208, per=31.51%, avg=40487.17, stdev=7107.81, samples=6 00:11:11.793 iops : min= 7102, max=11302, avg=10121.67, stdev=1777.07, samples=6 00:11:11.793 lat (usec) : 50=0.01%, 100=86.87%, 250=13.12% 00:11:11.793 lat (msec) : 2=0.01%, 50=0.01% 00:11:11.793 cpu : usr=5.07%, sys=13.87%, ctx=33167, majf=0, minf=2 00:11:11.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.793 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.793 issued rwts: total=33158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.793 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:11:11.793 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=196542: Fri Dec 13 19:05:46 2024 00:11:11.793 read: IOPS=7733, BW=30.2MiB/s (31.7MB/s)(85.3MiB/2823msec) 00:11:11.793 slat (usec): min=8, max=11906, avg=10.07, stdev=104.51 00:11:11.793 clat (usec): min=56, max=327, avg=117.47, stdev=26.08 00:11:11.793 lat (usec): min=65, max=11989, avg=127.54, stdev=107.46 00:11:11.793 clat percentiles (usec): 00:11:11.793 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 88], 00:11:11.793 | 30.00th=[ 93], 40.00th=[ 111], 50.00th=[ 128], 60.00th=[ 133], 00:11:11.793 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:11:11.793 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 196], 99.95th=[ 198], 00:11:11.793 | 99.99th=[ 233] 00:11:11.793 bw ( KiB/s): min=27512, max=40376, per=23.47%, avg=30161.60, stdev=5710.47, samples=5 00:11:11.793 iops : min= 6878, max=10094, avg=7540.40, stdev=1427.62, samples=5 00:11:11.793 lat (usec) : 100=37.53%, 250=62.45%, 500=0.01% 00:11:11.793 cpu : usr=3.19%, sys=11.48%, ctx=21834, majf=0, minf=2 00:11:11.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.794 issued rwts: total=21831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.794 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=196543: Fri Dec 13 19:05:46 2024 00:11:11.794 read: IOPS=6879, BW=26.9MiB/s (28.2MB/s)(70.7MiB/2630msec) 00:11:11.794 slat (nsec): min=8320, max=46963, avg=9350.62, stdev=1597.84 00:11:11.794 clat (usec): min=70, max=328, avg=133.51, stdev=14.64 00:11:11.794 lat (usec): min=79, max=337, avg=142.86, stdev=14.55 00:11:11.794 clat percentiles (usec): 00:11:11.794 | 1.00th=[ 93], 5.00th=[ 111], 10.00th=[ 119], 20.00th=[ 126], 00:11:11.794 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:11:11.794 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:11:11.794 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 194], 99.95th=[ 196], 00:11:11.794 | 99.99th=[ 221] 00:11:11.794 bw ( KiB/s): min=27528, max=28392, per=21.60%, avg=27756.80, stdev=358.38, samples=5 00:11:11.794 iops : min= 6882, max= 7098, avg=6939.20, stdev=89.59, samples=5 00:11:11.794 lat (usec) : 100=2.95%, 250=97.04%, 500=0.01% 00:11:11.794 cpu : usr=3.65%, sys=9.55%, ctx=18092, majf=0, minf=2 00:11:11.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.794 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.794 issued rwts: total=18092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.794 00:11:11.794 Run status group 0 (all jobs): 00:11:11.794 READ: bw=125MiB/s (132MB/s), 26.9MiB/s-40.5MiB/s (28.2MB/s-42.5MB/s), io=408MiB (428MB), run=2630-3253msec 00:11:11.794 00:11:11.794 Disk stats (read/write): 00:11:11.794 nvme0n1: ios=29689/0, merge=0/0, ticks=2274/0, in_queue=2274, util=94.12% 00:11:11.794 nvme0n2: ios=31067/0, merge=0/0, ticks=2383/0, in_queue=2383, util=94.05% 00:11:11.794 nvme0n3: ios=19657/0, merge=0/0, ticks=2221/0, 
in_queue=2221, util=96.06% 00:11:11.794 nvme0n4: ios=17916/0, merge=0/0, ticks=2242/0, in_queue=2242, util=96.46% 00:11:12.053 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.053 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:12.313 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.313 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:12.574 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.574 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:12.574 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.574 19:05:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:12.834 19:05:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:12.834 19:05:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 196376 00:11:12.834 19:05:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:12.834 19:05:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:13.775 nvmf hotplug test: fio failed as expected 00:11:13.775 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:14.036 19:05:48 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:14.036 rmmod nvme_rdma 00:11:14.036 rmmod nvme_fabrics 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 193292 ']' 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 193292 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 193292 ']' 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 193292 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.036 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193292 00:11:14.296 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.296 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.296 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193292' 00:11:14.296 killing process with pid 193292 00:11:14.296 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 193292 00:11:14.296 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 193292 00:11:14.296 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.296 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:14.296 00:11:14.296 real 0m27.332s 00:11:14.296 user 2m7.953s 00:11:14.296 sys 0m10.838s 00:11:14.296 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.296 19:05:48 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 ************************************ 00:11:14.296 END TEST nvmf_fio_target 00:11:14.296 ************************************ 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.556 ************************************ 00:11:14.556 START TEST nvmf_bdevio 00:11:14.556 ************************************ 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:14.556 * Looking for test storage... 00:11:14.556 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.556 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.818 --rc genhtml_branch_coverage=1 00:11:14.818 --rc genhtml_function_coverage=1 00:11:14.818 --rc genhtml_legend=1 00:11:14.818 --rc geninfo_all_blocks=1 00:11:14.818 --rc geninfo_unexecuted_blocks=1 00:11:14.818 00:11:14.818 ' 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.818 --rc genhtml_branch_coverage=1 00:11:14.818 --rc genhtml_function_coverage=1 00:11:14.818 --rc genhtml_legend=1 00:11:14.818 --rc geninfo_all_blocks=1 00:11:14.818 --rc geninfo_unexecuted_blocks=1 00:11:14.818 00:11:14.818 ' 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.818 --rc genhtml_branch_coverage=1 00:11:14.818 --rc genhtml_function_coverage=1 00:11:14.818 --rc genhtml_legend=1 00:11:14.818 --rc geninfo_all_blocks=1 00:11:14.818 --rc geninfo_unexecuted_blocks=1 00:11:14.818 00:11:14.818 ' 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.818 --rc genhtml_branch_coverage=1 00:11:14.818 --rc genhtml_function_coverage=1 00:11:14.818 --rc genhtml_legend=1 00:11:14.818 --rc geninfo_all_blocks=1 00:11:14.818 --rc geninfo_unexecuted_blocks=1 00:11:14.818 00:11:14.818 ' 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:14.818 19:05:48 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.818 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.819 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.819 19:05:48 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:22.983 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:22.983 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:22.983 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:22.984 19:05:56 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:22.984 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:22.984 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
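Note: the get_ip_address calls traced above (nvmf/common.sh lines 116-117) resolve each RDMA interface's IPv4 address by parsing the one-line output of "ip -o -4 addr show", where field 4 is ADDR/PREFIX. A minimal standalone sketch of that same pipeline, using only the function name, interface name, and address that appear in this trace:

    # Print the first IPv4 address assigned to an interface, mirroring the
    # awk/cut pipeline shown in the trace (nvmf/common.sh get_ip_address).
    get_ip_address() {
        local interface=$1
        # "ip -o -4" emits one line per IPv4 address; field 4 is "ADDR/PREFIX",
        # so cut drops the prefix length and leaves the bare address.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # prints 192.168.100.8 on this test bed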
00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:22.984 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:22.984 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:22.984 altname enp217s0f0np0 00:11:22.984 altname ens818f0np0 00:11:22.984 inet 192.168.100.8/24 scope global mlx_0_0 00:11:22.984 valid_lft forever preferred_lft forever 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:22.984 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:22.984 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:22.984 altname enp217s0f1np1 00:11:22.984 altname ens818f1np1 00:11:22.984 inet 192.168.100.9/24 scope global mlx_0_1 00:11:22.984 valid_lft forever preferred_lft forever 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:22.984 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
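Note: the get_rdma_if_list loop above collects one IPv4 address per RDMA-capable interface, and the trace that follows splits the resulting newline-separated RDMA_IP_LIST into the first and second target IPs with head and tail. A minimal sketch of that split, using the two addresses from this run (variable names are the ones shown in the trace):

    # Split a newline-separated address list into first/second target IPs,
    # as nvmf/common.sh does with RDMA_IP_LIST in the trace below.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9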
00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:22.985 192.168.100.9' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:22.985 192.168.100.9' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:22.985 192.168.100.9' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=201066 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 201066 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 201066 ']' 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 [2024-12-13 19:05:56.377192] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:22.985 [2024-12-13 19:05:56.377249] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.985 [2024-12-13 19:05:56.471850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.985 [2024-12-13 19:05:56.493854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.985 [2024-12-13 19:05:56.493890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.985 [2024-12-13 19:05:56.493900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.985 [2024-12-13 19:05:56.493908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.985 [2024-12-13 19:05:56.493915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
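nvmfappstart reduces to launching nvmf_tgt with the requested core mask and blocking until its RPC socket answers; the waitforlisten helper seen above does the blocking (with a retry budget, max_retries=100). A rough sketch using the paths from this trace, with the poll loop as our simplification:

    # Start the target on cores 3-6 (mask 0x78) with all trace groups enabled.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!

    # Block until the app listens on /var/tmp/spdk.sock.
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done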
00:11:22.985 [2024-12-13 19:05:56.495716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:22.985 [2024-12-13 19:05:56.495807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:22.985 [2024-12-13 19:05:56.495918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.985 [2024-12-13 19:05:56.495919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 [2024-12-13 19:05:56.658915] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1555e40/0x155a2f0) succeed. 00:11:22.985 [2024-12-13 19:05:56.669267] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1557480/0x159b990) succeed. 
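With the app up, bdevio.sh@18-22 configures the target side in five RPCs, traced piecewise below: create the RDMA transport, back it with a 64 MiB malloc bdev, create a subsystem, attach the namespace, and listen on the first target IP. Collected as plain rpc.py calls (flags copied from the trace):

    rpc="scripts/rpc.py"    # talks to the default /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420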
00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 Malloc0 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.985 [2024-12-13 19:05:56.847222] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:22.985 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:22.985 { 00:11:22.986 "params": { 00:11:22.986 "name": "Nvme$subsystem", 00:11:22.986 "trtype": "$TEST_TRANSPORT", 00:11:22.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.986 "adrfam": "ipv4", 00:11:22.986 "trsvcid": "$NVMF_PORT", 00:11:22.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.986 "hdgst": ${hdgst:-false}, 00:11:22.986 "ddgst": ${ddgst:-false} 00:11:22.986 }, 00:11:22.986 "method": "bdev_nvme_attach_controller" 00:11:22.986 } 00:11:22.986 EOF 00:11:22.986 )") 00:11:22.986 19:05:56 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:22.986 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:22.986 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:22.986 19:05:56 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:22.986 "params": { 00:11:22.986 "name": "Nvme1", 00:11:22.986 "trtype": "rdma", 00:11:22.986 "traddr": "192.168.100.8", 00:11:22.986 "adrfam": "ipv4", 00:11:22.986 "trsvcid": "4420", 00:11:22.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:22.986 "hdgst": false, 00:11:22.986 "ddgst": false 00:11:22.986 }, 00:11:22.986 "method": "bdev_nvme_attach_controller" 00:11:22.986 }' 00:11:22.986 [2024-12-13 19:05:56.899242] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:22.986 [2024-12-13 19:05:56.899288] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201089 ] 00:11:22.986 [2024-12-13 19:05:56.992177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:22.986 [2024-12-13 19:05:57.017142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.986 [2024-12-13 19:05:57.017249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.986 [2024-12-13 19:05:57.017250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.986 I/O targets: 00:11:22.986 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:22.986 00:11:22.986 00:11:22.986 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.986 http://cunit.sourceforge.net/ 00:11:22.986 00:11:22.986 00:11:22.986 Suite: bdevio tests on: Nvme1n1 00:11:22.986 Test: blockdev write read block ...passed 00:11:22.986 Test: blockdev write zeroes read block ...passed 00:11:22.986 Test: blockdev write zeroes read no split ...passed 00:11:22.986 Test: blockdev write zeroes read split ...passed 00:11:22.986 Test: blockdev write zeroes read split partial ...passed 00:11:22.986 Test: blockdev reset ...[2024-12-13 19:05:57.218255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:22.986 [2024-12-13 19:05:57.240871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:11:22.986 [2024-12-13 19:05:57.267870] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:22.986 passed 00:11:22.986 Test: blockdev write read 8 blocks ...passed 00:11:22.986 Test: blockdev write read size > 128k ...passed 00:11:22.986 Test: blockdev write read invalid size ...passed 00:11:22.986 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.986 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.986 Test: blockdev write read max offset ...passed 00:11:22.986 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.986 Test: blockdev writev readv 8 blocks ...passed 00:11:22.986 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.986 Test: blockdev writev readv block ...passed 00:11:22.986 Test: blockdev writev readv size > 128k ...passed 00:11:22.986 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.986 Test: blockdev comparev and writev ...[2024-12-13 19:05:57.271299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.986 [2024-12-13 19:05:57.271327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.271340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.986 [2024-12-13 19:05:57.271351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.271519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.986 [2024-12-13 19:05:57.271530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.271541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.986 [2024-12-13 19:05:57.271551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.271726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.986 [2024-12-13 19:05:57.271738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.271748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.986 [2024-12-13 19:05:57.271757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.271939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.986 [2024-12-13 19:05:57.271950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.271959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.986 [2024-12-13 19:05:57.271969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:22.986 passed 00:11:22.986 Test: blockdev nvme passthru rw ...passed 00:11:22.986 Test: blockdev nvme passthru vendor specific ...[2024-12-13 19:05:57.272322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:22.986 [2024-12-13 19:05:57.272336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.272380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:22.986 [2024-12-13 19:05:57.272391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.272442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:22.986 [2024-12-13 19:05:57.272452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:22.986 [2024-12-13 19:05:57.272492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:22.986 [2024-12-13 19:05:57.272507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:22.986 passed 00:11:22.986 Test: blockdev nvme admin passthru ...passed 00:11:22.986 Test: blockdev copy ...passed 00:11:22.986 00:11:22.986 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.986 suites 1 1 n/a 0 0 00:11:22.986 tests 23 23 23 0 0 00:11:22.986 asserts 152 152 152 0 n/a 00:11:22.986 00:11:22.986 Elapsed time = 0.172 seconds 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:23.247 rmmod nvme_rdma 00:11:23.247 rmmod nvme_fabrics 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.247 19:05:57 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 201066 ']' 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 201066 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 201066 ']' 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 201066 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 201066 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 201066' 00:11:23.247 killing process with pid 201066 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 201066 00:11:23.247 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 201066 00:11:23.508 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.508 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:23.508 00:11:23.508 real 0m9.077s 00:11:23.508 user 0m8.254s 00:11:23.508 sys 0m6.246s 00:11:23.508 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.508 19:05:57 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.508 ************************************ 00:11:23.508 END TEST nvmf_bdevio 00:11:23.508 ************************************ 00:11:23.508 19:05:57 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:23.508 00:11:23.508 real 4m14.333s 00:11:23.508 user 10m47.165s 00:11:23.508 sys 1m41.193s 00:11:23.508 19:05:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.508 19:05:57 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.508 ************************************ 00:11:23.508 END TEST nvmf_target_core 00:11:23.508 ************************************ 00:11:23.769 19:05:57 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:23.769 19:05:57 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.769 19:05:57 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.769 19:05:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:23.769 ************************************ 00:11:23.769 START TEST nvmf_target_extra 00:11:23.769 ************************************ 00:11:23.769 19:05:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:23.769 * Looking for test storage... 00:11:23.769 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:23.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.769 --rc genhtml_branch_coverage=1 00:11:23.769 --rc genhtml_function_coverage=1 00:11:23.769 --rc genhtml_legend=1 00:11:23.769 --rc geninfo_all_blocks=1 00:11:23.769 --rc geninfo_unexecuted_blocks=1 00:11:23.769 00:11:23.769 ' 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:23.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.769 --rc genhtml_branch_coverage=1 00:11:23.769 --rc genhtml_function_coverage=1 00:11:23.769 --rc genhtml_legend=1 00:11:23.769 --rc geninfo_all_blocks=1 00:11:23.769 --rc geninfo_unexecuted_blocks=1 00:11:23.769 00:11:23.769 ' 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:23.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.769 --rc genhtml_branch_coverage=1 00:11:23.769 --rc genhtml_function_coverage=1 00:11:23.769 --rc genhtml_legend=1 00:11:23.769 --rc geninfo_all_blocks=1 00:11:23.769 --rc geninfo_unexecuted_blocks=1 00:11:23.769 00:11:23.769 ' 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:23.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.769 --rc genhtml_branch_coverage=1 00:11:23.769 --rc genhtml_function_coverage=1 00:11:23.769 --rc genhtml_legend=1 00:11:23.769 --rc geninfo_all_blocks=1 00:11:23.769 --rc geninfo_unexecuted_blocks=1 00:11:23.769 00:11:23.769 ' 00:11:23.769 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.030 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.030 ************************************ 00:11:24.030 START TEST nvmf_example 00:11:24.030 ************************************ 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:24.030 * Looking for test storage... 
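Every suite in this log runs under the same run_test wrapper: a banner of asterisks, START TEST, the timed script, then the real/user/sys lines and an END TEST banner (visible above, where nvmf_bdevio closed out at real 0m9.077s). A hedged reconstruction of its core; the real helper also validates its arguments and propagates the exit status:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"              # emits the real/user/sys lines seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }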
00:11:24.030 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:24.030 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.291 --rc genhtml_branch_coverage=1 00:11:24.291 --rc genhtml_function_coverage=1 00:11:24.291 --rc genhtml_legend=1 00:11:24.291 --rc geninfo_all_blocks=1 00:11:24.291 --rc geninfo_unexecuted_blocks=1 00:11:24.291 00:11:24.291 ' 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.291 --rc genhtml_branch_coverage=1 00:11:24.291 --rc genhtml_function_coverage=1 00:11:24.291 --rc genhtml_legend=1 00:11:24.291 --rc geninfo_all_blocks=1 00:11:24.291 --rc geninfo_unexecuted_blocks=1 00:11:24.291 00:11:24.291 ' 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.291 --rc genhtml_branch_coverage=1 00:11:24.291 --rc genhtml_function_coverage=1 00:11:24.291 --rc genhtml_legend=1 00:11:24.291 --rc geninfo_all_blocks=1 00:11:24.291 --rc geninfo_unexecuted_blocks=1 00:11:24.291 00:11:24.291 ' 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.291 --rc genhtml_branch_coverage=1 00:11:24.291 --rc genhtml_function_coverage=1 00:11:24.291 --rc genhtml_legend=1 00:11:24.291 --rc geninfo_all_blocks=1 00:11:24.291 --rc geninfo_unexecuted_blocks=1 00:11:24.291 00:11:24.291 ' 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
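The gate above, lt 1.15 2, splits both version strings on the characters ".-:" and compares them field by field as integers, succeeding when the first sorts strictly lower. Condensed from the cmp_versions trace into one function (the real helper routes "<", ">" and "=" through the same loop):

    # Return 0 if version $1 sorts strictly below version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not strictly less
    }

    lt 1.15 2 && echo "lcov predates 2.x"    # true here: 1 < 2 at the first field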
00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.291 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.292 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
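The "line 33: [: : integer expression expected" message that keeps surfacing in this section is common.sh@33 evaluating '[' '' -eq 1 ']': the flag variable is empty, and test(1) cannot compare an empty string as an integer, so the check errors out and merely falls through as false. A defensive form defaults the variable first (FLAG is a stand-in name, not the harness's actual variable):

    [ "$FLAG" -eq 1 ]         # noisy when FLAG is empty or unset
    [ "${FLAG:-0}" -eq 1 ]    # treats the empty case as 0 and stays quiet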
00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.292 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:32.436 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
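The arrays above form a PCI device-ID allowlist keyed by vendor (0x8086 Intel for the e810/x722 entries, 0x15b3 Mellanox for mlx); both ports in this run match the mlx entry 0x1015. Once a function passes the allowlist, the records below map it to its kernel net interfaces through sysfs. Condensed, with the loop body copied from the trace:

    # mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)
    # 0x1015 is a ConnectX-4 Lx device ID (our annotation, hedged).
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:d9:00.0/net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done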
00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:32.436 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:32.436 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:32.436 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:32.436 19:06:05 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:32.436 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:32.437 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:32.437 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:32.437 altname enp217s0f0np0 00:11:32.437 altname ens818f0np0 00:11:32.437 inet 192.168.100.8/24 scope global mlx_0_0 00:11:32.437 valid_lft forever preferred_lft forever 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:32.437 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:32.437 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:32.437 altname enp217s0f1np1 00:11:32.437 altname ens818f1np1 00:11:32.437 inet 192.168.100.9/24 scope global mlx_0_1 00:11:32.437 valid_lft forever preferred_lft forever 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
# get_available_rdma_ips 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:32.437 19:06:05 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:32.437 192.168.100.9' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:32.437 192.168.100.9' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:32.437 192.168.100.9' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=205385 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 205385 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 205385 ']' 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.437 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.437 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.437 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:32.437 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:32.437 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.437 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.698 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:32.698 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.698 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.698 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.698 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:32.698 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.698 19:06:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:32.698 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:44.933 Initializing NVMe Controllers
00:11:44.933 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:11:44.933 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:44.933 Initialization complete. Launching workers.
00:11:44.933 ========================================================
00:11:44.933                                                                                Latency(us)
00:11:44.933 Device Information                                                 :     IOPS    MiB/s  Average      min      max
00:11:44.933 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24083.46    94.08  2655.89   651.73 13077.85
00:11:44.933 ========================================================
00:11:44.933 Total                                                              : 24083.46    94.08  2655.89   651.73 13077.85
00:11:44.933
00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:44.933 rmmod nvme_rdma 00:11:44.933 rmmod nvme_fabrics 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 205385 ']' 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 205385 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 205385 ']' 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 205385 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 205385 00:11:44.933 19:06:18
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 205385' 00:11:44.933 killing process with pid 205385 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 205385 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 205385 00:11:44.933 nvmf threads initialize successfully 00:11:44.933 bdev subsystem init successfully 00:11:44.933 created a nvmf target service 00:11:44.933 create targets's poll groups done 00:11:44.933 all subsystems of target started 00:11:44.933 nvmf target is running 00:11:44.933 all subsystems of target stopped 00:11:44.933 destroy targets's poll groups done 00:11:44.933 destroyed the nvmf target service 00:11:44.933 bdev subsystem finish successfully 00:11:44.933 nvmf threads destroy successfully 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.933 00:11:44.933 real 0m20.494s 00:11:44.933 user 0m52.708s 00:11:44.933 sys 0m6.169s 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:44.933 ************************************ 00:11:44.933 END TEST nvmf_example 00:11:44.933 ************************************ 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.933 ************************************ 00:11:44.933 START TEST nvmf_filesystem 00:11:44.933 ************************************ 00:11:44.933 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:44.933 * Looking for test storage... 
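Before the filesystem test proceeds, the nvmf_example run it follows is worth recapping: the RPCs traced at 19:06:06-19:06:07 stood the target up (RDMA transport, 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1, namespace, listener on 192.168.100.8:4420), and spdk_nvme_perf then drove a 4 KiB random workload (30% reads, queue depth 64) for 10 s, landing at 24083.46 IOPS with 2655.89 us average latency. The same sequence, condensed into equivalent stock rpc.py calls, is sketched below (an approximation: the test itself goes through its rpc_cmd wrapper over /var/tmp/spdk.sock, and paths are relative to the SPDK tree):

  # Sketch: the target-setup RPCs and the perf command seen in the trace above.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks -> "Malloc0"
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
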
00:11:44.933 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.934 --rc genhtml_branch_coverage=1 00:11:44.934 --rc genhtml_function_coverage=1 00:11:44.934 --rc genhtml_legend=1 00:11:44.934 --rc geninfo_all_blocks=1 00:11:44.934 --rc geninfo_unexecuted_blocks=1 00:11:44.934 00:11:44.934 ' 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.934 --rc genhtml_branch_coverage=1 00:11:44.934 --rc genhtml_function_coverage=1 00:11:44.934 --rc genhtml_legend=1 00:11:44.934 --rc geninfo_all_blocks=1 00:11:44.934 --rc geninfo_unexecuted_blocks=1 00:11:44.934 00:11:44.934 ' 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.934 --rc genhtml_branch_coverage=1 00:11:44.934 --rc genhtml_function_coverage=1 00:11:44.934 --rc genhtml_legend=1 00:11:44.934 --rc geninfo_all_blocks=1 00:11:44.934 --rc geninfo_unexecuted_blocks=1 00:11:44.934 00:11:44.934 ' 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.934 --rc genhtml_branch_coverage=1 00:11:44.934 --rc genhtml_function_coverage=1 00:11:44.934 --rc genhtml_legend=1 00:11:44.934 --rc geninfo_all_blocks=1 00:11:44.934 --rc geninfo_unexecuted_blocks=1 00:11:44.934 00:11:44.934 ' 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:11:44.934 19:06:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:44.934 19:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:44.934 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 
00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:44.935 
19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 
-- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:44.935 #define SPDK_CONFIG_H 00:11:44.935 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:44.935 #define SPDK_CONFIG_APPS 1 00:11:44.935 #define SPDK_CONFIG_ARCH native 00:11:44.935 #undef SPDK_CONFIG_ASAN 00:11:44.935 #undef SPDK_CONFIG_AVAHI 00:11:44.935 #undef SPDK_CONFIG_CET 00:11:44.935 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:44.935 #define SPDK_CONFIG_COVERAGE 1 00:11:44.935 #define SPDK_CONFIG_CROSS_PREFIX 00:11:44.935 #undef SPDK_CONFIG_CRYPTO 00:11:44.935 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:44.935 #undef SPDK_CONFIG_CUSTOMOCF 00:11:44.935 #undef SPDK_CONFIG_DAOS 00:11:44.935 #define SPDK_CONFIG_DAOS_DIR 00:11:44.935 #define SPDK_CONFIG_DEBUG 1 00:11:44.935 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:44.935 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:44.935 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:11:44.935 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:44.935 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:44.935 #undef SPDK_CONFIG_DPDK_UADK 00:11:44.935 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:11:44.935 #define SPDK_CONFIG_EXAMPLES 1 00:11:44.935 #undef SPDK_CONFIG_FC 00:11:44.935 #define SPDK_CONFIG_FC_PATH 00:11:44.935 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:44.935 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:44.935 #define SPDK_CONFIG_FSDEV 1 00:11:44.935 #undef SPDK_CONFIG_FUSE 00:11:44.935 #undef SPDK_CONFIG_FUZZER 00:11:44.935 #define SPDK_CONFIG_FUZZER_LIB 00:11:44.935 #undef SPDK_CONFIG_GOLANG 00:11:44.935 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:44.935 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:44.935 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:44.935 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:44.935 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:44.935 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:44.935 #undef SPDK_CONFIG_HAVE_LZ4 00:11:44.935 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:44.935 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:44.935 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:44.935 #define SPDK_CONFIG_IDXD 1 00:11:44.935 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:44.935 #undef SPDK_CONFIG_IPSEC_MB 00:11:44.935 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:44.935 #define SPDK_CONFIG_ISAL 1 00:11:44.935 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:44.935 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:44.935 #define SPDK_CONFIG_LIBDIR 00:11:44.935 #undef SPDK_CONFIG_LTO 00:11:44.935 #define SPDK_CONFIG_MAX_LCORES 128 00:11:44.935 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:44.935 #define SPDK_CONFIG_NVME_CUSE 1 00:11:44.935 #undef SPDK_CONFIG_OCF 00:11:44.935 #define SPDK_CONFIG_OCF_PATH 00:11:44.935 #define SPDK_CONFIG_OPENSSL_PATH 00:11:44.935 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:44.935 #define SPDK_CONFIG_PGO_DIR 00:11:44.935 #undef SPDK_CONFIG_PGO_USE 00:11:44.935 #define SPDK_CONFIG_PREFIX /usr/local 00:11:44.935 #undef SPDK_CONFIG_RAID5F 00:11:44.935 #undef SPDK_CONFIG_RBD 00:11:44.935 #define SPDK_CONFIG_RDMA 1 00:11:44.935 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:44.935 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:44.935 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:44.935 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:44.935 #define SPDK_CONFIG_SHARED 1 00:11:44.935 #undef SPDK_CONFIG_SMA 00:11:44.935 #define SPDK_CONFIG_TESTS 1 00:11:44.935 #undef SPDK_CONFIG_TSAN 00:11:44.935 #define SPDK_CONFIG_UBLK 1 00:11:44.935 #define SPDK_CONFIG_UBSAN 1 00:11:44.935 #undef SPDK_CONFIG_UNIT_TESTS 00:11:44.935 #undef SPDK_CONFIG_URING 00:11:44.935 #define SPDK_CONFIG_URING_PATH 00:11:44.935 #undef SPDK_CONFIG_URING_ZNS 00:11:44.935 #undef SPDK_CONFIG_USDT 00:11:44.935 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:44.935 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:44.935 #undef SPDK_CONFIG_VFIO_USER 00:11:44.935 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:44.935 #define SPDK_CONFIG_VHOST 1 00:11:44.935 #define SPDK_CONFIG_VIRTIO 1 00:11:44.935 #undef SPDK_CONFIG_VTUNE 00:11:44.935 #define SPDK_CONFIG_VTUNE_DIR 00:11:44.935 #define SPDK_CONFIG_WERROR 1 00:11:44.935 #define SPDK_CONFIG_WPDK_DIR 00:11:44.935 #undef SPDK_CONFIG_XNVME 00:11:44.935 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.935 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:44.936 19:06:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:44.936 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:44.937 19:06:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:44.937 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:44.938 19:06:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # 
valgrind= 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 207608 ]] 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 207608 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.lsOfFF 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 
00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.lsOfFF/tests/target /tmp/spdk.lsOfFF 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=422735872 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4861693952 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55164043264 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730615296 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6566572032 00:11:44.938 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30851846144 00:11:44.939 19:06:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865305600 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=13459456 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12323045376 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346126336 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23080960 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30865129472 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865309696 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=180224 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173048832 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173061120 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:44.939 * Looking for test storage... 
00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55164043264 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8781164544 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:44.939 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.939 --rc genhtml_branch_coverage=1 00:11:44.939 --rc genhtml_function_coverage=1 00:11:44.939 --rc genhtml_legend=1 00:11:44.939 --rc geninfo_all_blocks=1 00:11:44.939 --rc geninfo_unexecuted_blocks=1 00:11:44.939 00:11:44.939 ' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.939 --rc genhtml_branch_coverage=1 00:11:44.939 --rc genhtml_function_coverage=1 00:11:44.939 --rc genhtml_legend=1 00:11:44.939 --rc geninfo_all_blocks=1 00:11:44.939 --rc geninfo_unexecuted_blocks=1 00:11:44.939 00:11:44.939 ' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.939 --rc genhtml_branch_coverage=1 00:11:44.939 --rc genhtml_function_coverage=1 00:11:44.939 --rc genhtml_legend=1 00:11:44.939 --rc geninfo_all_blocks=1 00:11:44.939 --rc geninfo_unexecuted_blocks=1 00:11:44.939 00:11:44.939 ' 00:11:44.939 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.939 --rc genhtml_branch_coverage=1 00:11:44.939 --rc genhtml_function_coverage=1 00:11:44.939 --rc genhtml_legend=1 00:11:44.939 --rc geninfo_all_blocks=1 00:11:44.940 --rc geninfo_unexecuted_blocks=1 00:11:44.940 00:11:44.940 ' 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.940 19:06:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.940 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.940 19:06:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.940 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.201 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.201 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.201 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.201 19:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.342 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.342 19:06:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:53.343 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 
(0x15b3 - 0x1015)' 00:11:53.343 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:53.343 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:53.343 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 
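[Note on the discovery loop traced above: each PCI address collected into pci_devs is resolved to its kernel net-device name by globbing sysfs, which is where the "Found net devices under 0000:d9:00.x: mlx_0_x" lines come from. A minimal bash sketch of that logic follows; how the harness populates its pci_bus_cache is not shown in this log, so the lspci scan below is an illustrative assumption.]

    #!/usr/bin/env bash
    # Resolve each NIC's net-device name from its PCI address via sysfs,
    # mirroring the pci_net_devs expansions in nvmf/common.sh@411/427/428.
    mellanox=15b3
    # Assumed: gather Mellanox PCI addresses with a numeric lspci listing.
    mapfile -t pci_devs < <(lspci -Dn | awk -v v="$mellanox" '$0 ~ v":" {print $1}')
    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one glob hit per netdev
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done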
00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:53.343 19:06:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:53.343 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.343 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:53.343 altname enp217s0f0np0 00:11:53.343 altname ens818f0np0 00:11:53.343 inet 192.168.100.8/24 scope global mlx_0_0 00:11:53.343 valid_lft forever preferred_lft forever 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:53.343 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:53.343 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.343 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:53.343 altname enp217s0f1np1 00:11:53.343 altname ens818f1np1 00:11:53.344 inet 192.168.100.9/24 scope global mlx_0_1 00:11:53.344 valid_lft forever preferred_lft forever 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:53.344 192.168.100.9' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:53.344 192.168.100.9' 
00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:53.344 192.168.100.9' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.344 ************************************ 00:11:53.344 START TEST nvmf_filesystem_no_in_capsule 00:11:53.344 ************************************ 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=211042 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 211042 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 211042 ']' 00:11:53.344 19:06:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.344 [2024-12-13 19:06:26.749274] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:53.344 [2024-12-13 19:06:26.749326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.344 [2024-12-13 19:06:26.842819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.344 [2024-12-13 19:06:26.865968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.344 [2024-12-13 19:06:26.866006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.344 [2024-12-13 19:06:26.866016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.344 [2024-12-13 19:06:26.866024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.344 [2024-12-13 19:06:26.866031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
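[Note: the nvmfappstart/waitforlisten step traced above amounts to launching nvmf_tgt in the background and polling its RPC socket until it answers; the -m 0xF core mask accounts for the four reactor threads reported next. A condensed sketch, in which the retry count, poll interval, and rpc.py invocation are illustrative assumptions rather than the harness's exact values:]

    # Launch the target with the flags recorded in this log.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
      # The target is considered up once its RPC server answers on the socket.
      if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
           -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break
      fi
      sleep 0.5
    done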
00:11:53.344 [2024-12-13 19:06:26.867684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.344 [2024-12-13 19:06:26.867793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.344 [2024-12-13 19:06:26.867900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.344 [2024-12-13 19:06:26.867901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:53.344 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.344 [2024-12-13 19:06:27.017137] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:53.344 [2024-12-13 19:06:27.038637] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aee540/0x1af29f0) succeed. 00:11:53.344 [2024-12-13 19:06:27.047807] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aefb80/0x1b34090) succeed. 
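[Note: with both IB devices created, the per-test target configuration that follows is five RPCs plus one host-side connect, all of which appear in the trace above and below. Collected in one place for reference; rpc.py stands in for the harness's rpc_cmd wrapper:]

    # Target side (filesystem.sh@52-56): transport, bdev, subsystem, namespace, listener.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Host side (filesystem.sh@60): attach the exported namespace as /dev/nvme0n1.
    nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 \
      -a 192.168.100.8 -s 4420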
00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.344 Malloc1 00:11:53.344 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.345 [2024-12-13 19:06:27.305525] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:53.345 19:06:27 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:53.345 { 00:11:53.345 "name": "Malloc1", 00:11:53.345 "aliases": [ 00:11:53.345 "f35ad78f-65b9-4801-a4ea-2be0ae4326c1" 00:11:53.345 ], 00:11:53.345 "product_name": "Malloc disk", 00:11:53.345 "block_size": 512, 00:11:53.345 "num_blocks": 1048576, 00:11:53.345 "uuid": "f35ad78f-65b9-4801-a4ea-2be0ae4326c1", 00:11:53.345 "assigned_rate_limits": { 00:11:53.345 "rw_ios_per_sec": 0, 00:11:53.345 "rw_mbytes_per_sec": 0, 00:11:53.345 "r_mbytes_per_sec": 0, 00:11:53.345 "w_mbytes_per_sec": 0 00:11:53.345 }, 00:11:53.345 "claimed": true, 00:11:53.345 "claim_type": "exclusive_write", 00:11:53.345 "zoned": false, 00:11:53.345 "supported_io_types": { 00:11:53.345 "read": true, 00:11:53.345 "write": true, 00:11:53.345 "unmap": true, 00:11:53.345 "flush": true, 00:11:53.345 "reset": true, 00:11:53.345 "nvme_admin": false, 00:11:53.345 "nvme_io": false, 00:11:53.345 "nvme_io_md": false, 00:11:53.345 "write_zeroes": true, 00:11:53.345 "zcopy": true, 00:11:53.345 "get_zone_info": false, 00:11:53.345 "zone_management": false, 00:11:53.345 "zone_append": false, 00:11:53.345 "compare": false, 00:11:53.345 "compare_and_write": false, 00:11:53.345 "abort": true, 00:11:53.345 "seek_hole": false, 00:11:53.345 "seek_data": false, 00:11:53.345 "copy": true, 00:11:53.345 "nvme_iov_md": false 00:11:53.345 }, 00:11:53.345 "memory_domains": [ 00:11:53.345 { 00:11:53.345 "dma_device_id": "system", 00:11:53.345 "dma_device_type": 1 00:11:53.345 }, 00:11:53.345 { 00:11:53.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.345 "dma_device_type": 2 00:11:53.345 } 00:11:53.345 ], 00:11:53.345 "driver_specific": {} 00:11:53.345 } 00:11:53.345 ]' 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:11:53.345 19:06:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:54.286 19:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.286 19:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.286 19:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.286 19:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.286 19:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:56.196 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:56.457 19:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.398 ************************************ 00:11:57.398 START TEST filesystem_ext4 00:11:57.398 ************************************ 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:57.398 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:57.398 mke2fs 1.47.0 (5-Feb-2023) 00:11:57.659 Discarding device blocks: 0/522240 done 00:11:57.659 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:57.659 Filesystem UUID: a16d96ea-f846-4a63-8097-c79d586c918d 00:11:57.659 Superblock backups stored on 
blocks: 00:11:57.659 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:57.659 00:11:57.659 Allocating group tables: 0/64 done 00:11:57.659 Writing inode tables: 0/64 done 00:11:57.659 Creating journal (8192 blocks): done 00:11:57.659 Writing superblocks and filesystem accounting information: 0/64 done 00:11:57.659 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 211042 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.659 00:11:57.659 real 0m0.236s 00:11:57.659 user 0m0.034s 00:11:57.659 sys 0m0.106s 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:57.659 ************************************ 00:11:57.659 END TEST filesystem_ext4 00:11:57.659 ************************************ 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.659 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.660 19:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:11:57.660 ************************************ 00:11:57.660 START TEST filesystem_btrfs 00:11:57.660 ************************************ 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:57.660 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:57.920 btrfs-progs v6.8.1 00:11:57.920 See https://btrfs.readthedocs.io for more information. 00:11:57.920 00:11:57.920 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:57.920 NOTE: several default settings have changed in version 5.15, please make sure 00:11:57.920 this does not affect your deployments: 00:11:57.920 - DUP for metadata (-m dup) 00:11:57.920 - enabled no-holes (-O no-holes) 00:11:57.920 - enabled free-space-tree (-R free-space-tree) 00:11:57.920 00:11:57.920 Label: (null) 00:11:57.920 UUID: 6af41194-be9e-45c5-b31e-97338636388e 00:11:57.920 Node size: 16384 00:11:57.920 Sector size: 4096 (CPU page size: 4096) 00:11:57.920 Filesystem size: 510.00MiB 00:11:57.920 Block group profiles: 00:11:57.920 Data: single 8.00MiB 00:11:57.920 Metadata: DUP 32.00MiB 00:11:57.920 System: DUP 8.00MiB 00:11:57.920 SSD detected: yes 00:11:57.920 Zoned device: no 00:11:57.920 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:57.920 Checksum: crc32c 00:11:57.920 Number of devices: 1 00:11:57.920 Devices: 00:11:57.920 ID SIZE PATH 00:11:57.920 1 510.00MiB /dev/nvme0n1p1 00:11:57.920 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 211042 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.920 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.921 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.921 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.921 00:11:57.921 real 0m0.288s 00:11:57.921 user 0m0.037s 00:11:57.921 sys 0m0.162s 00:11:57.921 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.921 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.921 ************************************ 00:11:57.921 END TEST filesystem_btrfs 
00:11:57.921 ************************************ 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.181 ************************************ 00:11:58.181 START TEST filesystem_xfs 00:11:58.181 ************************************ 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:58.181 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:58.181 = sectsz=512 attr=2, projid32bit=1 00:11:58.181 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:58.181 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:58.181 data = bsize=4096 blocks=130560, imaxpct=25 00:11:58.181 = sunit=0 swidth=0 blks 00:11:58.181 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:58.181 log =internal log bsize=4096 blocks=16384, version=2 00:11:58.181 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:58.181 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:58.181 Discarding blocks...Done. 
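[Note: all three sub-tests (ext4, btrfs, xfs) funnel through the same make_filesystem helper whose trace precedes each mkfs banner: ext4 gets the -F force flag, the others get -f, and the mkfs call can be retried. A condensed sketch; only the force-flag selection and the mkfs invocation are visible in the trace, so the retry bound and back-off below are illustrative assumptions:]

    # Shape of the make_filesystem helper exercised by filesystem_ext4/btrfs/xfs.
    make_filesystem() {
      local fstype=$1 dev_name=$2 i=0 force
      if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
      until "mkfs.$fstype" $force "$dev_name"; do
        (( ++i > 3 )) && return 1   # illustrative retry bound
        sleep 1
      done
      return 0
    }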
00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:58.181 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.752 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.752 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:58.752 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 211042 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.752 00:11:58.752 real 0m0.678s 00:11:58.752 user 0m0.024s 00:11:58.752 sys 0m0.114s 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:58.752 ************************************ 00:11:58.752 END TEST filesystem_xfs 00:11:58.752 ************************************ 00:11:58.752 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:59.012 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:59.012 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.952 19:06:34 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 211042 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 211042 ']' 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 211042 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211042 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211042' 00:11:59.952 killing process with pid 211042 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 211042 00:11:59.952 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 211042 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:00.524 00:12:00.524 real 0m7.906s 00:12:00.524 user 0m30.889s 00:12:00.524 sys 0m1.321s 00:12:00.524 19:06:34 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.524 ************************************ 00:12:00.524 END TEST nvmf_filesystem_no_in_capsule 00:12:00.524 ************************************ 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.524 ************************************ 00:12:00.524 START TEST nvmf_filesystem_in_capsule 00:12:00.524 ************************************ 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=212690 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 212690 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 212690 ']' 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
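[editor's note] The in-capsule variant starts its own nvmf_tgt (pid 212690 above) and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-poll pattern, assuming a plain socket check — the real helper in autotest_common.sh layers retries and timeout handling on top:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the target is up and listening on its UNIX-domain RPC socket
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid"   # abort the wait if the app died during startup
        sleep 0.1
    done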
00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.524 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.524 [2024-12-13 19:06:34.737393] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:00.524 [2024-12-13 19:06:34.737445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.524 [2024-12-13 19:06:34.828670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.524 [2024-12-13 19:06:34.851146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.524 [2024-12-13 19:06:34.851181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.524 [2024-12-13 19:06:34.851191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.524 [2024-12-13 19:06:34.851200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.524 [2024-12-13 19:06:34.851207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.524 [2024-12-13 19:06:34.852969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.524 [2024-12-13 19:06:34.853006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.524 [2024-12-13 19:06:34.853113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.524 [2024-12-13 19:06:34.853114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.785 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.785 [2024-12-13 19:06:35.023102] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10a0540/0x10a49f0) 
succeed. 00:12:00.785 [2024-12-13 19:06:35.032199] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10a1b80/0x10e6090) succeed. 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.045 Malloc1 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.045 [2024-12-13 19:06:35.318405] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
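[editor's note] At this point the target side is fully provisioned. Condensed, the rpc_cmd sequence traced above is the usual five-step NVMe-oF bring-up — transport, backing bdev, subsystem, namespace, listener. Flags are exactly as in the log; rpc.py stands in for the harness's rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096   # -c 4096: in-capsule data up to 4 KiB
    rpc.py bdev_malloc_create 512 512 -b Malloc1                                     # 512 MiB ramdisk with 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420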
00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.045 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:01.045 { 00:12:01.045 "name": "Malloc1", 00:12:01.045 "aliases": [ 00:12:01.045 "e90c7626-3bf0-43f0-9385-1357f2a62b06" 00:12:01.045 ], 00:12:01.045 "product_name": "Malloc disk", 00:12:01.045 "block_size": 512, 00:12:01.045 "num_blocks": 1048576, 00:12:01.045 "uuid": "e90c7626-3bf0-43f0-9385-1357f2a62b06", 00:12:01.045 "assigned_rate_limits": { 00:12:01.045 "rw_ios_per_sec": 0, 00:12:01.045 "rw_mbytes_per_sec": 0, 00:12:01.045 "r_mbytes_per_sec": 0, 00:12:01.045 "w_mbytes_per_sec": 0 00:12:01.045 }, 00:12:01.045 "claimed": true, 00:12:01.045 "claim_type": "exclusive_write", 00:12:01.045 "zoned": false, 00:12:01.045 "supported_io_types": { 00:12:01.045 "read": true, 00:12:01.045 "write": true, 00:12:01.045 "unmap": true, 00:12:01.045 "flush": true, 00:12:01.045 "reset": true, 00:12:01.045 "nvme_admin": false, 00:12:01.045 "nvme_io": false, 00:12:01.045 "nvme_io_md": false, 00:12:01.045 "write_zeroes": true, 00:12:01.045 "zcopy": true, 00:12:01.045 "get_zone_info": false, 00:12:01.045 "zone_management": false, 00:12:01.045 "zone_append": false, 00:12:01.045 "compare": false, 00:12:01.045 "compare_and_write": false, 00:12:01.045 "abort": true, 00:12:01.045 "seek_hole": false, 00:12:01.046 "seek_data": false, 00:12:01.046 "copy": true, 00:12:01.046 "nvme_iov_md": false 00:12:01.046 }, 00:12:01.046 "memory_domains": [ 00:12:01.046 { 00:12:01.046 "dma_device_id": "system", 00:12:01.046 "dma_device_type": 1 00:12:01.046 }, 00:12:01.046 { 00:12:01.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.046 "dma_device_type": 2 00:12:01.046 } 00:12:01.046 ], 00:12:01.046 "driver_specific": {} 00:12:01.046 } 00:12:01.046 ]' 00:12:01.046 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:01.046 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:01.046 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:01.305 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:01.305 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:01.305 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:01.305 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
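[editor's note] The two jq reads above reproduce the malloc size exactly: 512 bytes/block x 1,048,576 blocks = 536,870,912 bytes = 512 MiB, which is what get_bdev_size echoes back. A sketch of that extraction with the unit conversion spelled out (the herestring form is an assumption; the helper feeds rpc_cmd output to the same jq filters):

    bdev_info=$(rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1048576
    echo $(( bs * nb / 1024 / 1024 ))             # 512 * 1048576 = 536870912 B = 512 MiB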
00:12:01.306 19:06:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:02.246 19:06:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.246 19:06:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:02.246 19:06:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.246 19:06:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:02.246 19:06:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:04.161 19:06:38 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:04.161 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:04.424 19:06:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:05.365 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:05.365 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:05.365 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.365 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.365 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.365 ************************************ 00:12:05.365 START TEST filesystem_in_capsule_ext4 00:12:05.365 ************************************ 00:12:05.365 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:05.365 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:05.366 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:05.366 mke2fs 1.47.0 (5-Feb-2023) 00:12:05.366 Discarding device blocks: 0/522240 done 00:12:05.366 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:05.366 Filesystem UUID: ce8c96fb-91a6-4418-9058-449c6f4244fb 00:12:05.366 
Superblock backups stored on blocks: 00:12:05.366 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:05.366 00:12:05.366 Allocating group tables: 0/64 done 00:12:05.366 Writing inode tables: 0/64 done 00:12:05.366 Creating journal (8192 blocks): done 00:12:05.626 Writing superblocks and filesystem accounting information: 0/64 done 00:12:05.626 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.626 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 212690 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.627 00:12:05.627 real 0m0.200s 00:12:05.627 user 0m0.033s 00:12:05.627 sys 0m0.077s 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:05.627 ************************************ 00:12:05.627 END TEST filesystem_in_capsule_ext4 00:12:05.627 ************************************ 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.627 19:06:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.627 ************************************ 00:12:05.627 START TEST filesystem_in_capsule_btrfs 00:12:05.627 ************************************ 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:05.627 19:06:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:05.888 btrfs-progs v6.8.1 00:12:05.888 See https://btrfs.readthedocs.io for more information. 00:12:05.888 00:12:05.888 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:05.888 NOTE: several default settings have changed in version 5.15, please make sure 00:12:05.888 this does not affect your deployments: 00:12:05.888 - DUP for metadata (-m dup) 00:12:05.888 - enabled no-holes (-O no-holes) 00:12:05.888 - enabled free-space-tree (-R free-space-tree) 00:12:05.888 00:12:05.888 Label: (null) 00:12:05.888 UUID: f3904486-30e0-4ed4-a674-07a3b49d22f4 00:12:05.888 Node size: 16384 00:12:05.888 Sector size: 4096 (CPU page size: 4096) 00:12:05.888 Filesystem size: 510.00MiB 00:12:05.888 Block group profiles: 00:12:05.888 Data: single 8.00MiB 00:12:05.888 Metadata: DUP 32.00MiB 00:12:05.888 System: DUP 8.00MiB 00:12:05.888 SSD detected: yes 00:12:05.888 Zoned device: no 00:12:05.888 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:05.888 Checksum: crc32c 00:12:05.888 Number of devices: 1 00:12:05.888 Devices: 00:12:05.888 ID SIZE PATH 00:12:05.888 1 510.00MiB /dev/nvme0n1p1 00:12:05.888 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 212690 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.888 00:12:05.888 real 0m0.249s 00:12:05.888 user 0m0.032s 00:12:05.888 sys 0m0.128s 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.888 ************************************ 00:12:05.888 END TEST filesystem_in_capsule_btrfs 00:12:05.888 ************************************ 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.888 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.149 ************************************ 00:12:06.149 START TEST filesystem_in_capsule_xfs 00:12:06.149 ************************************ 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:06.149 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:06.149 = sectsz=512 attr=2, projid32bit=1 00:12:06.149 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:06.149 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:06.149 data = bsize=4096 blocks=130560, imaxpct=25 00:12:06.149 = sunit=0 swidth=0 blks 00:12:06.149 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:06.149 log =internal log bsize=4096 blocks=16384, version=2 00:12:06.149 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:06.149 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:06.149 Discarding blocks...Done. 
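[editor's note] The make_filesystem trace above (and for ext4 and btrfs earlier) shows the helper's single per-filesystem branch: mkfs.ext4 wants -F to force, while btrfs and xfs take -f. A condensed sketch of that dispatch — the real function in autotest_common.sh also retries transient mkfs failures, omitted here:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        # ext4's mkfs spells its force flag differently from btrfs/xfs
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs.$fstype $force "$dev_name"
    }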
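[editor's note] Each filesystem_* subtest then runs the identical smoke test, traced again immediately below for xfs: mount the fresh filesystem, create and remove a file with syncs in between, unmount, and verify that both the target process and the block devices survived. Condensed from the target/filesystem.sh lines (23-43) visible in this log, with $nvmfpid standing in for the literal pid:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # nvmf_tgt must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible on the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so is the partition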
00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 212690 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.149 00:12:06.149 real 0m0.204s 00:12:06.149 user 0m0.029s 00:12:06.149 sys 0m0.078s 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.149 ************************************ 00:12:06.149 END TEST filesystem_in_capsule_xfs 00:12:06.149 ************************************ 00:12:06.149 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.410 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.410 19:06:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.351 19:06:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 212690 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 212690 ']' 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 212690 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 212690 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 212690' 00:12:07.351 killing process with pid 212690 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 212690 00:12:07.351 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 212690 00:12:07.612 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.612 00:12:07.612 real 0m7.301s 00:12:07.612 
user 0m28.428s 00:12:07.612 sys 0m1.203s 00:12:07.612 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.612 19:06:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.612 ************************************ 00:12:07.612 END TEST nvmf_filesystem_in_capsule 00:12:07.612 ************************************ 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:07.872 rmmod nvme_rdma 00:12:07.872 rmmod nvme_fabrics 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:07.872 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:07.873 00:12:07.873 real 0m23.294s 00:12:07.873 user 1m1.740s 00:12:07.873 sys 0m8.443s 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.873 ************************************ 00:12:07.873 END TEST nvmf_filesystem 00:12:07.873 ************************************ 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.873 ************************************ 00:12:07.873 START TEST nvmf_target_discovery 00:12:07.873 ************************************ 00:12:07.873 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:08.134 * Looking for test storage... 
00:12:08.134 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:08.134 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.135 --rc genhtml_branch_coverage=1 00:12:08.135 --rc genhtml_function_coverage=1 00:12:08.135 --rc genhtml_legend=1 00:12:08.135 --rc geninfo_all_blocks=1 00:12:08.135 --rc geninfo_unexecuted_blocks=1 00:12:08.135 00:12:08.135 ' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.135 --rc genhtml_branch_coverage=1 00:12:08.135 --rc genhtml_function_coverage=1 00:12:08.135 --rc genhtml_legend=1 00:12:08.135 --rc geninfo_all_blocks=1 00:12:08.135 --rc geninfo_unexecuted_blocks=1 00:12:08.135 00:12:08.135 ' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.135 --rc genhtml_branch_coverage=1 00:12:08.135 --rc genhtml_function_coverage=1 00:12:08.135 --rc genhtml_legend=1 00:12:08.135 --rc geninfo_all_blocks=1 00:12:08.135 --rc geninfo_unexecuted_blocks=1 00:12:08.135 00:12:08.135 ' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.135 --rc genhtml_branch_coverage=1 00:12:08.135 --rc genhtml_function_coverage=1 00:12:08.135 --rc genhtml_legend=1 00:12:08.135 --rc geninfo_all_blocks=1 00:12:08.135 --rc geninfo_unexecuted_blocks=1 00:12:08.135 00:12:08.135 ' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.135 19:06:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.135 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.135 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.275 19:06:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
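The one script error captured in this section is the nvmf/common.sh line 33 failure above: '[' '' -eq 1 ']' hands an empty string to an integer comparison, so [ prints "integer expression expected" and returns status 2, which the harness tolerates only because the guarded branch is simply skipped. A minimal sketch of the failure and the usual defensive fix, using a hypothetical flag variable since the real variable name is not visible in this trace:

    flag=''                       # empty in this run, as in the trace above
    [ "$flag" -eq 1 ]             # [: : integer expression expected (status 2)
    [ "${flag:-0}" -eq 1 ]        # defaulted expansion keeps the operand numeric; false, no error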
00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:16.275 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:16.275 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:16.275 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:16.276 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.276 19:06:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:16.276 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
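The load_ib_rdma_modules step traced above reduces to a uname guard plus a fixed list of modprobe calls. A condensed sketch mirroring only the commands visible in this trace (the helper in nvmf/common.sh may carry checks not shown here):

    if [ "$(uname)" = Linux ]; then   # RDMA kernel modules are Linux-only
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod"
        done
    fi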
00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:16.276 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:16.276 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:16.276 altname enp217s0f0np0 00:12:16.276 altname ens818f0np0 00:12:16.276 inet 192.168.100.8/24 scope global mlx_0_0 00:12:16.276 valid_lft forever preferred_lft forever 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:16.276 19:06:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:16.276 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:16.276 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:16.276 altname enp217s0f1np1 00:12:16.276 altname ens818f1np1 00:12:16.276 inet 192.168.100.9/24 scope global mlx_0_1 00:12:16.276 valid_lft forever preferred_lft forever 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
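The get_ip_address calls traced above come down to a single pipeline: field 4 of `ip -o -4 addr show <interface>` is ADDR/PREFIX, and cut strips the prefix length. Restated as a sketch, with the values this run produced:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 in this run
    get_ip_address mlx_0_1   # 192.168.100.9 in this run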
00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:16.276 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:16.277 192.168.100.9' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:16.277 192.168.100.9' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:16.277 192.168.100.9' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.277 19:06:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=217560 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 217560 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 217560 ']' 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.277 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 [2024-12-13 19:06:49.829988] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:16.277 [2024-12-13 19:06:49.830037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.277 [2024-12-13 19:06:49.923122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.277 [2024-12-13 19:06:49.945367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.277 [2024-12-13 19:06:49.945406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.277 [2024-12-13 19:06:49.945416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.277 [2024-12-13 19:06:49.945424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.277 [2024-12-13 19:06:49.945431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
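The two target addresses selected a few entries back fall out of the multi-line RDMA_IP_LIST with plain head/tail slicing, as in this sketch:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9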
00:12:16.277 [2024-12-13 19:06:49.947190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.277 [2024-12-13 19:06:49.947299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.277 [2024-12-13 19:06:49.947409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.277 [2024-12-13 19:06:49.947410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 [2024-12-13 19:06:50.113628] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a86540/0x1a8a9f0) succeed. 00:12:16.277 [2024-12-13 19:06:50.122832] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a87b80/0x1acc090) succeed. 
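The transport creation above and the Null1..Null4 provisioning traced next condense to one loop. rpc_cmd in these scripts drives the SPDK RPC server on /var/tmp/spdk.sock; invoking scripts/rpc.py directly is the rough equivalent sketched here, assuming an SPDK checkout rather than the harness's exact wrapper (RPC names and arguments are verbatim from the trace; NULL_BDEV_SIZE=102400 and NULL_BLOCK_SIZE=512 per discovery.sh):

    rpc=./scripts/rpc.py   # from an SPDK checkout; rpc_cmd in the trace is a thin wrapper
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in $(seq 1 4); do
        $rpc bdev_null_create Null$i 102400 512
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430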
00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 Null1 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 [2024-12-13 19:06:50.320474] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 Null2 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:16.277 19:06:50 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.277 Null3 00:12:16.277 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 Null4 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:12:16.278 00:12:16.278 Discovery Log Number of Records 6, Generation counter 6 00:12:16.278 =====Discovery Log Entry 0====== 00:12:16.278 trtype: rdma 00:12:16.278 adrfam: ipv4 00:12:16.278 subtype: current discovery subsystem 00:12:16.278 treq: not required 00:12:16.278 portid: 0 00:12:16.278 trsvcid: 4420 00:12:16.278 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:16.278 traddr: 192.168.100.8 00:12:16.278 eflags: explicit discovery connections, duplicate discovery information 00:12:16.278 rdma_prtype: not specified 00:12:16.278 rdma_qptype: connected 00:12:16.278 rdma_cms: rdma-cm 00:12:16.278 rdma_pkey: 0x0000 00:12:16.278 =====Discovery Log Entry 1====== 00:12:16.278 trtype: rdma 00:12:16.278 adrfam: ipv4 00:12:16.278 subtype: nvme subsystem 00:12:16.278 treq: not required 00:12:16.278 portid: 0 00:12:16.278 trsvcid: 4420 00:12:16.278 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:16.278 traddr: 192.168.100.8 00:12:16.278 eflags: none 00:12:16.278 rdma_prtype: not specified 00:12:16.278 rdma_qptype: connected 00:12:16.278 rdma_cms: rdma-cm 00:12:16.278 rdma_pkey: 0x0000 00:12:16.278 =====Discovery Log Entry 2====== 00:12:16.278 trtype: rdma 00:12:16.278 adrfam: ipv4 00:12:16.278 subtype: nvme subsystem 00:12:16.278 treq: not required 00:12:16.278 portid: 0 00:12:16.278 trsvcid: 4420 00:12:16.278 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:16.278 traddr: 192.168.100.8 00:12:16.278 eflags: none 00:12:16.278 rdma_prtype: not specified 00:12:16.278 rdma_qptype: connected 00:12:16.278 rdma_cms: rdma-cm 00:12:16.278 rdma_pkey: 0x0000 00:12:16.278 =====Discovery Log Entry 3====== 00:12:16.278 trtype: rdma 00:12:16.278 adrfam: ipv4 00:12:16.278 subtype: nvme subsystem 00:12:16.278 treq: not required 00:12:16.278 portid: 0 00:12:16.278 trsvcid: 4420 00:12:16.278 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:16.278 traddr: 192.168.100.8 00:12:16.278 eflags: none 00:12:16.278 rdma_prtype: not specified 00:12:16.278 rdma_qptype: connected 00:12:16.278 rdma_cms: rdma-cm 00:12:16.278 rdma_pkey: 0x0000 00:12:16.278 =====Discovery Log Entry 4====== 00:12:16.278 trtype: rdma 00:12:16.278 adrfam: ipv4 00:12:16.278 subtype: nvme subsystem 00:12:16.278 treq: not required 00:12:16.278 portid: 0 00:12:16.278 trsvcid: 4420 00:12:16.278 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:16.278 traddr: 192.168.100.8 00:12:16.278 eflags: none 00:12:16.278 rdma_prtype: not specified 00:12:16.278 rdma_qptype: connected 00:12:16.278 rdma_cms: rdma-cm 00:12:16.278 rdma_pkey: 0x0000 00:12:16.278 =====Discovery Log Entry 5====== 00:12:16.278 trtype: rdma 00:12:16.278 adrfam: ipv4 00:12:16.278 subtype: discovery subsystem referral 00:12:16.278 treq: not required 00:12:16.278 portid: 0 00:12:16.278 trsvcid: 4430 00:12:16.278 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:16.278 traddr: 192.168.100.8 00:12:16.278 eflags: none 00:12:16.278 rdma_prtype: unrecognized 00:12:16.278 rdma_qptype: unrecognized 00:12:16.278 rdma_cms: unrecognized 00:12:16.278 rdma_pkey: 0x0000 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:16.278 Perform nvmf subsystem discovery via RPC 00:12:16.278 19:06:50 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.278 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 [ 00:12:16.278 { 00:12:16.278 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:16.278 "subtype": "Discovery", 00:12:16.278 "listen_addresses": [ 00:12:16.278 { 00:12:16.278 "trtype": "RDMA", 00:12:16.278 "adrfam": "IPv4", 00:12:16.278 "traddr": "192.168.100.8", 00:12:16.278 "trsvcid": "4420" 00:12:16.278 } 00:12:16.278 ], 00:12:16.278 "allow_any_host": true, 00:12:16.278 "hosts": [] 00:12:16.278 }, 00:12:16.278 { 00:12:16.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.278 "subtype": "NVMe", 00:12:16.278 "listen_addresses": [ 00:12:16.278 { 00:12:16.278 "trtype": "RDMA", 00:12:16.278 "adrfam": "IPv4", 00:12:16.278 "traddr": "192.168.100.8", 00:12:16.278 "trsvcid": "4420" 00:12:16.278 } 00:12:16.278 ], 00:12:16.278 "allow_any_host": true, 00:12:16.278 "hosts": [], 00:12:16.278 "serial_number": "SPDK00000000000001", 00:12:16.278 "model_number": "SPDK bdev Controller", 00:12:16.278 "max_namespaces": 32, 00:12:16.278 "min_cntlid": 1, 00:12:16.278 "max_cntlid": 65519, 00:12:16.278 "namespaces": [ 00:12:16.278 { 00:12:16.278 "nsid": 1, 00:12:16.278 "bdev_name": "Null1", 00:12:16.278 "name": "Null1", 00:12:16.278 "nguid": "123FAB8F43F84C8995BF2D80AA8A9A18", 00:12:16.278 "uuid": "123fab8f-43f8-4c89-95bf-2d80aa8a9a18" 00:12:16.278 } 00:12:16.278 ] 00:12:16.278 }, 00:12:16.278 { 00:12:16.278 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:16.278 "subtype": "NVMe", 00:12:16.278 "listen_addresses": [ 00:12:16.278 { 00:12:16.278 "trtype": "RDMA", 00:12:16.278 "adrfam": "IPv4", 00:12:16.278 "traddr": "192.168.100.8", 00:12:16.278 "trsvcid": "4420" 00:12:16.278 } 00:12:16.278 ], 00:12:16.278 "allow_any_host": true, 00:12:16.278 "hosts": [], 00:12:16.279 "serial_number": "SPDK00000000000002", 00:12:16.279 "model_number": "SPDK bdev Controller", 00:12:16.279 "max_namespaces": 32, 00:12:16.279 "min_cntlid": 1, 00:12:16.279 "max_cntlid": 65519, 00:12:16.279 "namespaces": [ 00:12:16.279 { 00:12:16.279 "nsid": 1, 00:12:16.279 "bdev_name": "Null2", 00:12:16.279 "name": "Null2", 00:12:16.279 "nguid": "DB6EF38AFB954FB5AAD004D838BA15C5", 00:12:16.279 "uuid": "db6ef38a-fb95-4fb5-aad0-04d838ba15c5" 00:12:16.279 } 00:12:16.279 ] 00:12:16.279 }, 00:12:16.279 { 00:12:16.279 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:16.279 "subtype": "NVMe", 00:12:16.279 "listen_addresses": [ 00:12:16.279 { 00:12:16.279 "trtype": "RDMA", 00:12:16.279 "adrfam": "IPv4", 00:12:16.279 "traddr": "192.168.100.8", 00:12:16.279 "trsvcid": "4420" 00:12:16.279 } 00:12:16.279 ], 00:12:16.279 "allow_any_host": true, 00:12:16.279 "hosts": [], 00:12:16.279 "serial_number": "SPDK00000000000003", 00:12:16.279 "model_number": "SPDK bdev Controller", 00:12:16.279 "max_namespaces": 32, 00:12:16.279 "min_cntlid": 1, 00:12:16.279 "max_cntlid": 65519, 00:12:16.279 "namespaces": [ 00:12:16.279 { 00:12:16.279 "nsid": 1, 00:12:16.279 "bdev_name": "Null3", 00:12:16.279 "name": "Null3", 00:12:16.279 "nguid": "962F102FACE0442495398E5A7C011969", 00:12:16.279 "uuid": "962f102f-ace0-4424-9539-8e5a7c011969" 00:12:16.279 } 00:12:16.279 ] 00:12:16.279 }, 00:12:16.279 { 00:12:16.279 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:16.279 "subtype": "NVMe", 00:12:16.279 "listen_addresses": [ 00:12:16.279 { 00:12:16.279 
"trtype": "RDMA", 00:12:16.279 "adrfam": "IPv4", 00:12:16.279 "traddr": "192.168.100.8", 00:12:16.279 "trsvcid": "4420" 00:12:16.279 } 00:12:16.279 ], 00:12:16.279 "allow_any_host": true, 00:12:16.279 "hosts": [], 00:12:16.279 "serial_number": "SPDK00000000000004", 00:12:16.279 "model_number": "SPDK bdev Controller", 00:12:16.279 "max_namespaces": 32, 00:12:16.279 "min_cntlid": 1, 00:12:16.279 "max_cntlid": 65519, 00:12:16.279 "namespaces": [ 00:12:16.279 { 00:12:16.279 "nsid": 1, 00:12:16.279 "bdev_name": "Null4", 00:12:16.279 "name": "Null4", 00:12:16.279 "nguid": "A38F5D1B7230473092A53C62EE554AF5", 00:12:16.279 "uuid": "a38f5d1b-7230-4730-92a5-3c62ee554af5" 00:12:16.279 } 00:12:16.279 ] 00:12:16.279 } 00:12:16.279 ] 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:16.279 
19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.279 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:16.540 19:06:50 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:16.540 rmmod nvme_rdma 00:12:16.540 rmmod nvme_fabrics 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 217560 ']' 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 217560 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 217560 ']' 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 217560 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217560 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217560' 00:12:16.540 killing process with pid 217560 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 217560 00:12:16.540 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 217560 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:16.801 00:12:16.801 real 0m8.898s 00:12:16.801 user 0m6.541s 00:12:16.801 sys 0m6.134s 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.801 ************************************ 00:12:16.801 END TEST nvmf_target_discovery 
00:12:16.801 ************************************ 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.801 ************************************ 00:12:16.801 START TEST nvmf_referrals 00:12:16.801 ************************************ 00:12:16.801 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:17.062 * Looking for test storage... 00:12:17.062 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.062 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.063 --rc genhtml_branch_coverage=1 00:12:17.063 --rc genhtml_function_coverage=1 00:12:17.063 --rc genhtml_legend=1 00:12:17.063 --rc geninfo_all_blocks=1 00:12:17.063 --rc geninfo_unexecuted_blocks=1 00:12:17.063 00:12:17.063 ' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.063 --rc genhtml_branch_coverage=1 00:12:17.063 --rc genhtml_function_coverage=1 00:12:17.063 --rc genhtml_legend=1 00:12:17.063 --rc geninfo_all_blocks=1 00:12:17.063 --rc geninfo_unexecuted_blocks=1 00:12:17.063 00:12:17.063 ' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.063 --rc genhtml_branch_coverage=1 00:12:17.063 --rc genhtml_function_coverage=1 00:12:17.063 --rc genhtml_legend=1 00:12:17.063 --rc geninfo_all_blocks=1 00:12:17.063 --rc geninfo_unexecuted_blocks=1 00:12:17.063 00:12:17.063 ' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.063 --rc genhtml_branch_coverage=1 00:12:17.063 --rc genhtml_function_coverage=1 00:12:17.063 --rc genhtml_legend=1 00:12:17.063 --rc geninfo_all_blocks=1 00:12:17.063 --rc geninfo_unexecuted_blocks=1 00:12:17.063 00:12:17.063 ' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.063 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.063 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:25.204 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:25.204 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:25.204 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:25.204 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:25.204 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:25.205 19:06:58 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:25.205 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:25.205 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:25.205 altname enp217s0f0np0 00:12:25.205 altname ens818f0np0 00:12:25.205 inet 192.168.100.8/24 scope global mlx_0_0 00:12:25.205 valid_lft forever preferred_lft forever 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:25.205 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:25.205 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:25.205 altname enp217s0f1np1 00:12:25.205 altname ens818f1np1 00:12:25.205 inet 192.168.100.9/24 scope global mlx_0_1 00:12:25.205 valid_lft forever preferred_lft forever 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:25.205 19:06:58 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:25.205 192.168.100.9' 
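For reference, the get_ip_address helper being traced here reduces to one short pipeline. A minimal standalone sketch, assuming the mlx_0_0/mlx_0_1 interface names seen on this node:

  # Print the first IPv4 address bound to an interface, stripping the /CIDR
  # suffix -- the same ip | awk | cut chain nvmf/common.sh runs above.
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  for nic in mlx_0_0 mlx_0_1; do   # interface names assumed from this run
    echo "$nic -> $(get_ip_address "$nic")"
  done

On this node the loop would print 192.168.100.8 and 192.168.100.9, the two addresses just collected into RDMA_IP_LIST.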
00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:25.205 192.168.100.9' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:25.205 192.168.100.9' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=221235 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 221235 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 221235 ']' 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.205 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.205 [2024-12-13 19:06:58.771178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:12:25.205 [2024-12-13 19:06:58.771228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.205 [2024-12-13 19:06:58.853947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.205 [2024-12-13 19:06:58.883260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.205 [2024-12-13 19:06:58.883300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.205 [2024-12-13 19:06:58.883314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.206 [2024-12-13 19:06:58.883324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.206 [2024-12-13 19:06:58.883333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.206 [2024-12-13 19:06:58.885494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.206 [2024-12-13 19:06:58.885606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.206 [2024-12-13 19:06:58.885718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.206 [2024-12-13 19:06:58.885718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.206 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.206 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:25.206 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.206 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.206 19:06:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 [2024-12-13 19:06:59.069436] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12e4540/0x12e89f0) succeed. 00:12:25.206 [2024-12-13 19:06:59.078601] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12e5b80/0x132a090) succeed. 
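The transport created by rpc_cmd above can be reproduced outside the harness; rpc_cmd in these traces is autotest's wrapper around scripts/rpc.py, which talks to the default /var/tmp/spdk.sock socket. A minimal sketch, with the SPDK path taken from this workspace:

  # Create the RDMA transport with the same options as the trace above:
  # 1024 shared receive buffers and an 8192-byte I/O unit size.
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192

With the transport in place, the test can attach the discovery listener and referral entries that follow.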
00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 [2024-12-13 19:06:59.218545] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.206 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:25.467 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:25.727 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:25.727 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:25.727 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:25.727 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:25.727 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:25.727 19:06:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.727 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.987 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:25.987 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:25.987 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:25.987 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:25.987 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:25.987 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:25.987 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:25.987 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:25.988 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:25.988 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:25.988 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:25.988 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:25.988 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
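The assertions traced above come down to two helpers in target/referrals.sh: get_referral_ips reads the advertised referral addresses either from the target's own RPC view (rpc) or from what a host-side discovery controller reports on the wire (nvme), and the [[ ... == ... ]] tests require the two views to agree. A minimal reconstruction from the xtrace lines follows; the real function bodies may differ, and rpc_cmd plus the NVME_HOST array are harness-provided:

get_referral_ips() {
    if [[ $1 == rpc ]]; then
        # Target-side view: referrals the discovery subsystem was told to advertise.
        # The unquoted echo intentionally collapses the sorted list onto one line.
        echo $(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    elif [[ $1 == nvme ]]; then
        # Host-side view: records actually returned by a discovery log page read,
        # minus the entry describing the discovery subsystem itself.
        echo $(nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009 -o json |
            jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
            sort)
    fi
}

[[ $(get_referral_ips rpc) == $(get_referral_ips nvme) ]]   # e.g. both "127.0.0.2 127.0.0.2"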
00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:26.256 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:26.257 rmmod nvme_rdma 00:12:26.257 rmmod nvme_fabrics 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 221235 ']' 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 221235 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 221235 ']' 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 221235 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.257 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221235 00:12:26.517 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.517 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.517 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221235' 00:12:26.517 killing process with pid 221235 00:12:26.517 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 221235 00:12:26.517 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 221235 00:12:26.777 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.777 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:26.777 00:12:26.777 real 0m9.778s 00:12:26.777 user 0m10.938s 00:12:26.777 sys 0m6.464s 00:12:26.777 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.777 19:07:00 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.777 ************************************ 00:12:26.777 END TEST nvmf_referrals 00:12:26.777 ************************************ 00:12:26.777 19:07:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:26.777 19:07:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.777 19:07:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.777 19:07:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.777 ************************************ 00:12:26.777 START TEST nvmf_connect_disconnect 00:12:26.777 ************************************ 00:12:26.777 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:26.777 * Looking for test storage... 00:12:26.777 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:26.777 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:26.777 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:26.777 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:27.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.039 --rc genhtml_branch_coverage=1 00:12:27.039 --rc genhtml_function_coverage=1 00:12:27.039 --rc genhtml_legend=1 00:12:27.039 --rc geninfo_all_blocks=1 00:12:27.039 --rc geninfo_unexecuted_blocks=1 00:12:27.039 00:12:27.039 ' 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:27.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.039 --rc genhtml_branch_coverage=1 00:12:27.039 --rc genhtml_function_coverage=1 00:12:27.039 --rc genhtml_legend=1 00:12:27.039 --rc geninfo_all_blocks=1 00:12:27.039 --rc geninfo_unexecuted_blocks=1 00:12:27.039 00:12:27.039 ' 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:27.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.039 --rc genhtml_branch_coverage=1 00:12:27.039 --rc genhtml_function_coverage=1 00:12:27.039 --rc genhtml_legend=1 00:12:27.039 --rc geninfo_all_blocks=1 00:12:27.039 --rc geninfo_unexecuted_blocks=1 00:12:27.039 00:12:27.039 ' 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:27.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.039 --rc genhtml_branch_coverage=1 00:12:27.039 --rc genhtml_function_coverage=1 00:12:27.039 --rc genhtml_legend=1 00:12:27.039 --rc geninfo_all_blocks=1 00:12:27.039 --rc geninfo_unexecuted_blocks=1 00:12:27.039 00:12:27.039 ' 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.039 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.040 19:07:01 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.040 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.040 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
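The device scan traced here is a PCI vendor:device classification: nvmf/common.sh indexes a pci_bus_cache map (built earlier in the script) with Intel (0x8086) and Mellanox (0x15b3) IDs, and because the transport is rdma on an mlx5 driver only the mlx array is kept, leaving the two 0x1015 (MT27710, ConnectX-4 Lx) functions found on this node. A standalone sketch of the same check, reading sysfs directly instead of the cached map; the bus addresses are the ones from this run:

mellanox=0x15b3
for pci in 0000:d9:00.0 0000:d9:00.1; do
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x15b3 on this node
    device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x1015, MT27710 ConnectX-4 Lx
    [[ $vendor == "$mellanox" ]] && echo "Found $pci ($vendor - $device)"
done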
00:12:35.181 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:35.181 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:35.181 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
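Each matched PCI function is then mapped to its kernel network interface through sysfs, which is where the mlx_0_0 and mlx_0_1 names come from. The traced idiom, isolated into a self-contained snippet (the bus address is from this run):

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob expands to .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"
net_devs+=("${pci_net_devs[@]}")

The same style of iproute2 plumbing later extracts each interface's address, as in ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1, which yields 192.168.100.8 and 192.168.100.9 in the entries that follow.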
00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:35.181 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:35.181 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:35.182 19:07:08 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:35.182 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:35.182 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:35.182 altname enp217s0f0np0 00:12:35.182 altname ens818f0np0 00:12:35.182 inet 192.168.100.8/24 scope global mlx_0_0 00:12:35.182 valid_lft forever preferred_lft forever 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:35.182 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:35.182 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:35.182 altname enp217s0f1np1 00:12:35.182 altname ens818f1np1 00:12:35.182 inet 192.168.100.9/24 scope global mlx_0_1 00:12:35.182 valid_lft forever preferred_lft forever 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:35.182 19:07:08 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:35.182 192.168.100.9' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:35.182 192.168.100.9' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:35.182 192.168.100.9' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:35.182 19:07:08 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=225082 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 225082 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 225082 ']' 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.182 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.182 [2024-12-13 19:07:08.640613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:35.182 [2024-12-13 19:07:08.640669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.182 [2024-12-13 19:07:08.733615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.182 [2024-12-13 19:07:08.756449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.183 [2024-12-13 19:07:08.756485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.183 [2024-12-13 19:07:08.756494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.183 [2024-12-13 19:07:08.756502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.183 [2024-12-13 19:07:08.756509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
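nvmfappstart boils down to launching the target in the background and polling its RPC socket until it answers; the DPDK EAL parameters above and the reactor notices that follow are that target coming up on cores 0-3 (-m 0xF). A simplified stand-in for the harness's waitforlisten, which is more elaborate in practice; rpc.py and the rpc_get_methods method are standard SPDK pieces, and the default socket is /var/tmp/spdk.sock:

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the app answers on its RPC socket.
until scripts/rpc.py rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.1
done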
00:12:35.183 [2024-12-13 19:07:08.758070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.183 [2024-12-13 19:07:08.758133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.183 [2024-12-13 19:07:08.758244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.183 [2024-12-13 19:07:08.758245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.183 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.183 [2024-12-13 19:07:08.899578] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:35.183 [2024-12-13 19:07:08.921096] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e60540/0x1e649f0) succeed. 00:12:35.183 [2024-12-13 19:07:08.930378] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e61b80/0x1ea6090) succeed. 
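With the transport created and both IB devices bound, the next trace block provisions the subsystem through a short JSON-RPC sequence. Collapsed into plain rpc.py calls (rpc_cmd in the trace is the harness wrapper around the same socket; the sizes, NQN, and addresses are the ones from this run):

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
bdev=$(scripts/rpc.py bdev_malloc_create 64 512)   # 64 MiB malloc bdev, 512 B blocks; prints the name (Malloc0)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420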
00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.183 [2024-12-13 19:07:09.083195] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:35.183 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:12:38.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
[00:12:41.022 through 00:17:47.135: the same one-line notice, NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s), repeats once per cycle for the remaining connect/disconnect iterations (num_iterations=100); the duplicate lines are condensed here]
00:17:50.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:50.432 19:12:24
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:50.432 rmmod nvme_rdma 00:17:50.432 rmmod nvme_fabrics 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 225082 ']' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 225082 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 225082 ']' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 225082 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 225082 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 225082' 00:17:50.432 killing process with pid 225082 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 225082 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 225082 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:50.432 00:17:50.432 real 5m23.516s 00:17:50.432 user 21m0.286s 00:17:50.432 sys 0m18.591s 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:50.432 
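The connect/disconnect pass that just finished reduces to a short RPC plus nvme-cli sequence. Below is a minimal stand-alone sketch of that flow, not the test script itself, assuming a running SPDK nvmf_tgt whose RDMA transport was created earlier in the run, rpc.py from the SPDK scripts/ directory on PATH, and the listener address used above:

  # Back the subsystem with a 64 MiB malloc bdev (512-byte blocks); SPDK names it Malloc0
  rpc.py bdev_malloc_create 64 512
  # Subsystem allowing any host (-a), with the serial number seen in the trace
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # num_iterations=100 with NVME_CONNECT='nvme connect -i 8' (8 I/O queues per connect)
  for _ in $(seq 1 100); do
      nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done

Each disconnect prints the one-line "disconnected 1 controller(s)" notice that dominates the iteration log above.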
************************************ 00:17:50.432 END TEST nvmf_connect_disconnect 00:17:50.432 ************************************ 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:50.432 ************************************ 00:17:50.432 START TEST nvmf_multitarget 00:17:50.432 ************************************ 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:17:50.432 * Looking for test storage... 00:17:50.432 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:50.432 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.433 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:50.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.694 --rc genhtml_branch_coverage=1 00:17:50.694 --rc genhtml_function_coverage=1 00:17:50.694 --rc genhtml_legend=1 00:17:50.694 --rc geninfo_all_blocks=1 00:17:50.694 --rc geninfo_unexecuted_blocks=1 00:17:50.694 00:17:50.694 ' 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:50.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.694 --rc genhtml_branch_coverage=1 00:17:50.694 --rc genhtml_function_coverage=1 00:17:50.694 --rc genhtml_legend=1 00:17:50.694 --rc geninfo_all_blocks=1 00:17:50.694 --rc geninfo_unexecuted_blocks=1 00:17:50.694 00:17:50.694 ' 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:50.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.694 --rc genhtml_branch_coverage=1 00:17:50.694 --rc genhtml_function_coverage=1 00:17:50.694 --rc genhtml_legend=1 00:17:50.694 --rc geninfo_all_blocks=1 00:17:50.694 --rc geninfo_unexecuted_blocks=1 00:17:50.694 00:17:50.694 ' 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:50.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.694 --rc genhtml_branch_coverage=1 00:17:50.694 --rc genhtml_function_coverage=1 00:17:50.694 --rc genhtml_legend=1 00:17:50.694 --rc geninfo_all_blocks=1 00:17:50.694 --rc geninfo_unexecuted_blocks=1 00:17:50.694 00:17:50.694 ' 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.694 19:12:24 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.694 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:50.694 19:12:24 [paths/export.sh@2 through @6: four near-identical PATH assignments that prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, followed by export PATH and an echo of the result; the long, repetitive PATH values are condensed here]
00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.695 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:50.695 19:12:24
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:50.695 19:12:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:58.835 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:58.835 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:58.835 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:58.835 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.835 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:58.836 19:12:31 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:58.836 19:12:31 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:58.836 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:58.836 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:58.836 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:58.836 altname enp217s0f0np0 00:17:58.836 altname ens818f0np0 00:17:58.836 inet 192.168.100.8/24 scope global mlx_0_0 00:17:58.836 valid_lft forever preferred_lft forever 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:58.836 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:58.836 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:58.836 altname enp217s0f1np1 00:17:58.836 altname ens818f1np1 00:17:58.836 inet 192.168.100.9/24 scope global mlx_0_1 00:17:58.836 valid_lft forever preferred_lft forever 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:58.836 19:12:32 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:58.836 192.168.100.9' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:58.836 192.168.100.9' 00:17:58.836 19:12:32 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:58.836 192.168.100.9' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:58.836 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=284612 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 284612 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 284612 ']' 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:58.837 [2024-12-13 19:12:32.193545] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
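The address bookkeeping traced above, get_ip_address plus the head/tail split of RDMA_IP_LIST, boils down to the following sketch, assuming the mlx_0_0/mlx_0_1 interface names found on this host (get_ip is a hypothetical stand-in for the common.sh helper, which additionally restricts itself to interfaces that get_rdma_if_list reports as RDMA-capable):

  # First IPv4 address of an interface: field 4 of `ip -o -4 addr show`, prefix length stripped
  get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

  RDMA_IP_LIST="$(get_ip mlx_0_0)
  $(get_ip mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9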
00:17:58.837 [2024-12-13 19:12:32.193594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.837 [2024-12-13 19:12:32.287634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.837 [2024-12-13 19:12:32.310193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.837 [2024-12-13 19:12:32.310229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.837 [2024-12-13 19:12:32.310238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.837 [2024-12-13 19:12:32.310246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.837 [2024-12-13 19:12:32.310253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.837 [2024-12-13 19:12:32.311837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.837 [2024-12-13 19:12:32.311945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.837 [2024-12-13 19:12:32.312072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.837 [2024-12-13 19:12:32.312073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:58.837 "nvmf_tgt_1" 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:58.837 "nvmf_tgt_2" 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:58.837 
19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:58.837 true 00:17:58.837 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:58.837 true 00:17:58.837 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:58.837 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:59.098 rmmod nvme_rdma 00:17:59.098 rmmod nvme_fabrics 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 284612 ']' 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 284612 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 284612 ']' 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 284612 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284612 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284612' 00:17:59.098 killing process with pid 284612 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 284612 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 284612 00:17:59.098 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:59.358 00:17:59.358 real 0m8.860s 00:17:59.358 user 0m7.432s 00:17:59.358 sys 0m6.151s 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:59.358 ************************************ 00:17:59.358 END TEST nvmf_multitarget 00:17:59.358 ************************************ 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.358 ************************************ 00:17:59.358 START TEST nvmf_rpc 00:17:59.358 ************************************ 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:17:59.358 * Looking for test storage... 
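Both this test and the previous one open with the same lcov version gate from scripts/common.sh, traced below as `lt 1.15 2` via cmp_versions. The comparison amounts to a field-by-field numeric check; here is a stand-alone sketch (ver_lt is a hypothetical helper, not the script's own implementation):

  # Does dotted version $1 sort strictly before $2? e.g. ver_lt 1.15 2 -> true
  ver_lt() {
      local IFS=.-
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # versions are equal
  }
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected"

When the installed lcov is older than 2, the tests export the LCOV_OPTS/LCOV values carrying the --rc lcov_branch_coverage/lcov_function_coverage flags that appear repeatedly in this log.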
00:17:59.358 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.358 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.620 --rc genhtml_branch_coverage=1 00:17:59.620 --rc genhtml_function_coverage=1 00:17:59.620 --rc genhtml_legend=1 00:17:59.620 --rc geninfo_all_blocks=1 00:17:59.620 --rc geninfo_unexecuted_blocks=1 00:17:59.620 00:17:59.620 ' 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.620 --rc genhtml_branch_coverage=1 00:17:59.620 --rc genhtml_function_coverage=1 00:17:59.620 --rc genhtml_legend=1 00:17:59.620 --rc geninfo_all_blocks=1 00:17:59.620 --rc geninfo_unexecuted_blocks=1 00:17:59.620 00:17:59.620 ' 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.620 --rc genhtml_branch_coverage=1 00:17:59.620 --rc genhtml_function_coverage=1 00:17:59.620 --rc genhtml_legend=1 00:17:59.620 --rc geninfo_all_blocks=1 00:17:59.620 --rc geninfo_unexecuted_blocks=1 00:17:59.620 00:17:59.620 ' 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.620 --rc genhtml_branch_coverage=1 00:17:59.620 --rc genhtml_function_coverage=1 00:17:59.620 --rc genhtml_legend=1 00:17:59.620 --rc geninfo_all_blocks=1 00:17:59.620 --rc geninfo_unexecuted_blocks=1 00:17:59.620 00:17:59.620 ' 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.620 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:59.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:59.621 19:12:33 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:59.621 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.754 19:12:40 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:07.754 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:07.754 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:07.754 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:07.754 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:07.754 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:07.755 19:12:40 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:07.755 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:07.755 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:07.755 altname enp217s0f0np0 00:18:07.755 altname ens818f0np0 00:18:07.755 inet 192.168.100.8/24 scope global mlx_0_0 00:18:07.755 valid_lft forever preferred_lft forever 00:18:07.755 
19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:07.755 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:07.755 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:07.755 altname enp217s0f1np1 00:18:07.755 altname ens818f1np1 00:18:07.755 inet 192.168.100.9/24 scope global mlx_0_1 00:18:07.755 valid_lft forever preferred_lft forever 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
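
The get_ip_address helper traced above is the one reusable pattern in this stretch: it pulls an interface's IPv4 address out of "ip -o -4 addr show". A minimal standalone sketch of that pattern, assuming an interface name like the mlx_0_0 this run discovered:

# Sketch only; "mlx_0_0" is whatever RDMA-backed netdev the harness found, not a fixed name.
get_ip_address() {
    local interface=$1
    # -o emits one record per line; field 4 is ADDR/PREFIX, so strip the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
ip=$(get_ip_address mlx_0_0)    # yields 192.168.100.8 in this run
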
00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:07.755 192.168.100.9' 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:07.755 192.168.100.9' 00:18:07.755 19:12:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:07.755 192.168.100.9' 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
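
The trace above then peels the first and second target IPs off the newline-separated RDMA_IP_LIST with head/tail. A minimal sketch of that selection, assuming the list holds one address per line as it does here:

# Sketch: first line becomes the first target IP, second line the second.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
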
00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=288205 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 288205 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 288205 ']' 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.755 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.755 [2024-12-13 19:12:41.101901] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:07.755 [2024-12-13 19:12:41.101955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.755 [2024-12-13 19:12:41.194070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.755 [2024-12-13 19:12:41.216828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.756 [2024-12-13 19:12:41.216864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.756 [2024-12-13 19:12:41.216874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.756 [2024-12-13 19:12:41.216882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.756 [2024-12-13 19:12:41.216889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
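
The nvmfappstart sequence above boils down to launching nvmf_tgt with the given coremask and blocking in waitforlisten until the RPC socket answers. A rough sketch of that start-and-poll shape — the retry loop is illustrative, not the harness's exact waitforlisten:

# Sketch, assuming the workspace paths seen in this run.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds once the app is listening on /var/tmp/spdk.sock
    if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
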
00:18:07.756 [2024-12-13 19:12:41.222059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.756 [2024-12-13 19:12:41.222104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.756 [2024-12-13 19:12:41.222133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.756 [2024-12-13 19:12:41.222134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:07.756 "tick_rate": 2500000000, 00:18:07.756 "poll_groups": [ 00:18:07.756 { 00:18:07.756 "name": "nvmf_tgt_poll_group_000", 00:18:07.756 "admin_qpairs": 0, 00:18:07.756 "io_qpairs": 0, 00:18:07.756 "current_admin_qpairs": 0, 00:18:07.756 "current_io_qpairs": 0, 00:18:07.756 "pending_bdev_io": 0, 00:18:07.756 "completed_nvme_io": 0, 00:18:07.756 "transports": [] 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "nvmf_tgt_poll_group_001", 00:18:07.756 "admin_qpairs": 0, 00:18:07.756 "io_qpairs": 0, 00:18:07.756 "current_admin_qpairs": 0, 00:18:07.756 "current_io_qpairs": 0, 00:18:07.756 "pending_bdev_io": 0, 00:18:07.756 "completed_nvme_io": 0, 00:18:07.756 "transports": [] 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "nvmf_tgt_poll_group_002", 00:18:07.756 "admin_qpairs": 0, 00:18:07.756 "io_qpairs": 0, 00:18:07.756 "current_admin_qpairs": 0, 00:18:07.756 "current_io_qpairs": 0, 00:18:07.756 "pending_bdev_io": 0, 00:18:07.756 "completed_nvme_io": 0, 00:18:07.756 "transports": [] 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "nvmf_tgt_poll_group_003", 00:18:07.756 "admin_qpairs": 0, 00:18:07.756 "io_qpairs": 0, 00:18:07.756 "current_admin_qpairs": 0, 00:18:07.756 "current_io_qpairs": 0, 00:18:07.756 "pending_bdev_io": 0, 00:18:07.756 "completed_nvme_io": 0, 00:18:07.756 "transports": [] 00:18:07.756 } 00:18:07.756 ] 00:18:07.756 }' 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.756 [2024-12-13 19:12:41.513934] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22a45a0/0x22a8a50) succeed. 00:18:07.756 [2024-12-13 19:12:41.523168] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22a5be0/0x22ea0f0) succeed. 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.756 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:07.756 "tick_rate": 2500000000, 00:18:07.756 "poll_groups": [ 00:18:07.756 { 00:18:07.756 "name": "nvmf_tgt_poll_group_000", 00:18:07.756 "admin_qpairs": 0, 00:18:07.756 "io_qpairs": 0, 00:18:07.756 "current_admin_qpairs": 0, 00:18:07.756 "current_io_qpairs": 0, 00:18:07.756 "pending_bdev_io": 0, 00:18:07.756 "completed_nvme_io": 0, 00:18:07.756 "transports": [ 00:18:07.756 { 00:18:07.756 "trtype": "RDMA", 00:18:07.756 "pending_data_buffer": 0, 00:18:07.756 "devices": [ 00:18:07.756 { 00:18:07.756 "name": "mlx5_0", 00:18:07.756 "polls": 15618, 00:18:07.756 "idle_polls": 15618, 00:18:07.756 "completions": 0, 00:18:07.756 "requests": 0, 00:18:07.756 "request_latency": 0, 00:18:07.756 "pending_free_request": 0, 00:18:07.756 "pending_rdma_read": 0, 00:18:07.756 "pending_rdma_write": 0, 00:18:07.756 "pending_rdma_send": 0, 00:18:07.756 "total_send_wrs": 0, 00:18:07.756 "send_doorbell_updates": 0, 00:18:07.756 "total_recv_wrs": 4096, 00:18:07.756 "recv_doorbell_updates": 1 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "mlx5_1", 00:18:07.756 "polls": 15618, 00:18:07.756 "idle_polls": 15618, 00:18:07.756 "completions": 0, 00:18:07.756 "requests": 0, 00:18:07.756 "request_latency": 0, 00:18:07.756 "pending_free_request": 0, 00:18:07.756 "pending_rdma_read": 0, 00:18:07.756 "pending_rdma_write": 0, 00:18:07.756 "pending_rdma_send": 0, 00:18:07.756 "total_send_wrs": 0, 00:18:07.756 "send_doorbell_updates": 0, 00:18:07.756 "total_recv_wrs": 4096, 00:18:07.756 "recv_doorbell_updates": 1 00:18:07.756 } 00:18:07.756 ] 00:18:07.756 } 00:18:07.756 ] 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "nvmf_tgt_poll_group_001", 00:18:07.756 "admin_qpairs": 0, 00:18:07.756 "io_qpairs": 0, 00:18:07.756 "current_admin_qpairs": 0, 00:18:07.756 "current_io_qpairs": 0, 00:18:07.756 "pending_bdev_io": 0, 00:18:07.756 "completed_nvme_io": 0, 00:18:07.756 "transports": [ 00:18:07.756 { 00:18:07.756 "trtype": "RDMA", 00:18:07.756 "pending_data_buffer": 0, 00:18:07.756 "devices": [ 00:18:07.756 { 00:18:07.756 "name": "mlx5_0", 
00:18:07.756 "polls": 9718, 00:18:07.756 "idle_polls": 9718, 00:18:07.756 "completions": 0, 00:18:07.756 "requests": 0, 00:18:07.756 "request_latency": 0, 00:18:07.756 "pending_free_request": 0, 00:18:07.756 "pending_rdma_read": 0, 00:18:07.756 "pending_rdma_write": 0, 00:18:07.756 "pending_rdma_send": 0, 00:18:07.756 "total_send_wrs": 0, 00:18:07.756 "send_doorbell_updates": 0, 00:18:07.756 "total_recv_wrs": 4096, 00:18:07.756 "recv_doorbell_updates": 1 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "mlx5_1", 00:18:07.756 "polls": 9718, 00:18:07.756 "idle_polls": 9718, 00:18:07.756 "completions": 0, 00:18:07.756 "requests": 0, 00:18:07.756 "request_latency": 0, 00:18:07.756 "pending_free_request": 0, 00:18:07.756 "pending_rdma_read": 0, 00:18:07.756 "pending_rdma_write": 0, 00:18:07.756 "pending_rdma_send": 0, 00:18:07.756 "total_send_wrs": 0, 00:18:07.756 "send_doorbell_updates": 0, 00:18:07.756 "total_recv_wrs": 4096, 00:18:07.756 "recv_doorbell_updates": 1 00:18:07.756 } 00:18:07.756 ] 00:18:07.756 } 00:18:07.756 ] 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "nvmf_tgt_poll_group_002", 00:18:07.756 "admin_qpairs": 0, 00:18:07.756 "io_qpairs": 0, 00:18:07.756 "current_admin_qpairs": 0, 00:18:07.756 "current_io_qpairs": 0, 00:18:07.756 "pending_bdev_io": 0, 00:18:07.756 "completed_nvme_io": 0, 00:18:07.756 "transports": [ 00:18:07.756 { 00:18:07.756 "trtype": "RDMA", 00:18:07.756 "pending_data_buffer": 0, 00:18:07.756 "devices": [ 00:18:07.756 { 00:18:07.756 "name": "mlx5_0", 00:18:07.756 "polls": 5470, 00:18:07.756 "idle_polls": 5470, 00:18:07.756 "completions": 0, 00:18:07.756 "requests": 0, 00:18:07.756 "request_latency": 0, 00:18:07.756 "pending_free_request": 0, 00:18:07.756 "pending_rdma_read": 0, 00:18:07.756 "pending_rdma_write": 0, 00:18:07.756 "pending_rdma_send": 0, 00:18:07.756 "total_send_wrs": 0, 00:18:07.756 "send_doorbell_updates": 0, 00:18:07.756 "total_recv_wrs": 4096, 00:18:07.756 "recv_doorbell_updates": 1 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "mlx5_1", 00:18:07.756 "polls": 5470, 00:18:07.756 "idle_polls": 5470, 00:18:07.756 "completions": 0, 00:18:07.756 "requests": 0, 00:18:07.756 "request_latency": 0, 00:18:07.756 "pending_free_request": 0, 00:18:07.756 "pending_rdma_read": 0, 00:18:07.756 "pending_rdma_write": 0, 00:18:07.756 "pending_rdma_send": 0, 00:18:07.756 "total_send_wrs": 0, 00:18:07.756 "send_doorbell_updates": 0, 00:18:07.756 "total_recv_wrs": 4096, 00:18:07.756 "recv_doorbell_updates": 1 00:18:07.756 } 00:18:07.756 ] 00:18:07.756 } 00:18:07.756 ] 00:18:07.756 }, 00:18:07.756 { 00:18:07.756 "name": "nvmf_tgt_poll_group_003", 00:18:07.756 "admin_qpairs": 0, 00:18:07.756 "io_qpairs": 0, 00:18:07.756 "current_admin_qpairs": 0, 00:18:07.756 "current_io_qpairs": 0, 00:18:07.756 "pending_bdev_io": 0, 00:18:07.756 "completed_nvme_io": 0, 00:18:07.756 "transports": [ 00:18:07.756 { 00:18:07.756 "trtype": "RDMA", 00:18:07.757 "pending_data_buffer": 0, 00:18:07.757 "devices": [ 00:18:07.757 { 00:18:07.757 "name": "mlx5_0", 00:18:07.757 "polls": 887, 00:18:07.757 "idle_polls": 887, 00:18:07.757 "completions": 0, 00:18:07.757 "requests": 0, 00:18:07.757 "request_latency": 0, 00:18:07.757 "pending_free_request": 0, 00:18:07.757 "pending_rdma_read": 0, 00:18:07.757 "pending_rdma_write": 0, 00:18:07.757 "pending_rdma_send": 0, 00:18:07.757 "total_send_wrs": 0, 00:18:07.757 "send_doorbell_updates": 0, 00:18:07.757 "total_recv_wrs": 4096, 00:18:07.757 "recv_doorbell_updates": 1 00:18:07.757 }, 00:18:07.757 { 00:18:07.757 "name": "mlx5_1", 
00:18:07.757 "polls": 887, 00:18:07.757 "idle_polls": 887, 00:18:07.757 "completions": 0, 00:18:07.757 "requests": 0, 00:18:07.757 "request_latency": 0, 00:18:07.757 "pending_free_request": 0, 00:18:07.757 "pending_rdma_read": 0, 00:18:07.757 "pending_rdma_write": 0, 00:18:07.757 "pending_rdma_send": 0, 00:18:07.757 "total_send_wrs": 0, 00:18:07.757 "send_doorbell_updates": 0, 00:18:07.757 "total_recv_wrs": 4096, 00:18:07.757 "recv_doorbell_updates": 1 00:18:07.757 } 00:18:07.757 ] 00:18:07.757 } 00:18:07.757 ] 00:18:07.757 } 00:18:07.757 ] 00:18:07.757 }' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:07.757 19:12:41 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.757 Malloc1 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.757 [2024-12-13 19:12:41.944836] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:07.757 19:12:41 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:07.757 19:12:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:07.757 [2024-12-13 19:12:41.991167] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:18:07.757 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:07.757 could not add new controller: failed to write to nvme-fabrics device 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.757 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:08.696 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:08.696 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:08.696 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.696 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:08.696 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:11.236 19:12:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:11.236 19:12:45 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:11.236 19:12:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.236 19:12:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:11.236 19:12:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.236 19:12:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:11.236 19:12:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:11.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:11.804 [2024-12-13 19:12:46.102895] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:18:11.804 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:11.804 could not add new controller: failed to write to nvme-fabrics device 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.804 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:13.185 19:12:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:13.185 19:12:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:13.185 19:12:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.185 19:12:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:13.185 19:12:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:15.093 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:15.093 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:15.093 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.093 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:15.093 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.093 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:15.093 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.032 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.032 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:16.032 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:16.032 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.032 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 [2024-12-13 19:12:50.172703] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.033 19:12:50 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.033 19:12:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:16.973 19:12:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:16.973 19:12:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:16.973 19:12:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.973 19:12:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:16.973 19:12:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:18.885 19:12:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:18.885 19:12:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:18.885 19:12:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:18.885 19:12:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:18.885 19:12:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.885 19:12:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:18.885 19:12:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:19.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.826 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.086 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.086 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.086 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
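The waitforserial / waitforserial_disconnect polling that repeats throughout this test (tags autotest_common.sh@1202-1212 and @1223-1235) reduces to roughly the loop below; this is a paraphrase of what the xtrace shows, not the verbatim helper:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL column matches, e.g. SPDKISFASTANDAWESOME
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

# waitforserial_disconnect is the inverse: it loops until
#   lsblk -l -o NAME,SERIAL | grep -q -w "$serial"
# stops matching, i.e. the namespace has disappeared from the host.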
xtrace_disable 00:18:20.086 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.086 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.086 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.087 [2024-12-13 19:12:54.227359] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.087 19:12:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:21.027 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:21.027 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:21.027 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.027 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:21.027 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:22.937 19:12:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:22.937 19:12:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:22.937 
19:12:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.937 19:12:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:22.937 19:12:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.937 19:12:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:22.937 19:12:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.878 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.138 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 [2024-12-13 19:12:58.278129] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:25.080 19:12:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:25.080 19:12:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:25.080 19:12:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.080 19:12:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:25.080 19:12:59 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:26.991 19:13:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:26.991 19:13:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:26.991 19:13:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:26.991 19:13:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:26.991 19:13:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.991 19:13:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:26.991 19:13:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:27.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:27.932 19:13:02 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.932 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.933 [2024-12-13 19:13:02.302640] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.933 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.193 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.193 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:28.193 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.193 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.193 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.193 19:13:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:29.134 19:13:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:29.134 19:13:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:29.134 19:13:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.134 19:13:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:29.134 19:13:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:31.043 19:13:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:31.043 19:13:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:31.043 19:13:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:31.043 19:13:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:31.043 19:13:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.043 19:13:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:31.043 19:13:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:31.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:31.984 19:13:06 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.984 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.984 [2024-12-13 19:13:06.358318] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.245 19:13:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:33.186 19:13:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:33.186 19:13:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:33.186 19:13:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.186 19:13:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:33.186 19:13:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:35.098 19:13:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:35.098 19:13:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:35.098 19:13:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:35.098 19:13:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:35.098 19:13:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:18:35.098 19:13:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:35.098 19:13:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.039 [2024-12-13 19:13:10.395528] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
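All five iterations of the first loop (target/rpc.sh@81-94) run the same lifecycle; every command below appears verbatim in the trace above, so only the loop framing is reconstructed:

for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5   # NSID 5, as created with -n 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done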
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.039 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 [2024-12-13 19:13:10.447664] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 [2024-12-13 19:13:10.495853] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 [2024-12-13 19:13:10.544021] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:36.301 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.302 [2024-12-13 19:13:10.596236] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.302 19:13:10 
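The second loop (target/rpc.sh@99-107) exercises the same RPCs without a host connect: the namespace is added with an auto-assigned NSID (no -n flag) and NSID 1 is removed before the subsystem is deleted. Reconstructed from the tags above, with only the loop framing assumed:

for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done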
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.302 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.562 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.562 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:36.563 "tick_rate": 2500000000, 00:18:36.563 "poll_groups": [ 00:18:36.563 { 00:18:36.563 "name": "nvmf_tgt_poll_group_000", 00:18:36.563 "admin_qpairs": 2, 00:18:36.563 "io_qpairs": 27, 00:18:36.563 "current_admin_qpairs": 0, 00:18:36.563 "current_io_qpairs": 0, 00:18:36.563 "pending_bdev_io": 0, 00:18:36.563 "completed_nvme_io": 127, 00:18:36.563 "transports": [ 00:18:36.563 { 00:18:36.563 "trtype": "RDMA", 00:18:36.563 "pending_data_buffer": 0, 00:18:36.563 "devices": [ 00:18:36.563 { 00:18:36.563 "name": "mlx5_0", 00:18:36.563 "polls": 3496886, 00:18:36.563 "idle_polls": 3496562, 00:18:36.563 "completions": 365, 00:18:36.563 "requests": 182, 00:18:36.563 "request_latency": 37058134, 00:18:36.563 "pending_free_request": 0, 00:18:36.563 "pending_rdma_read": 0, 00:18:36.563 "pending_rdma_write": 0, 00:18:36.563 "pending_rdma_send": 0, 00:18:36.563 "total_send_wrs": 309, 00:18:36.563 "send_doorbell_updates": 159, 00:18:36.563 "total_recv_wrs": 4278, 00:18:36.563 "recv_doorbell_updates": 159 00:18:36.563 }, 00:18:36.563 { 00:18:36.563 "name": "mlx5_1", 00:18:36.563 "polls": 3496886, 00:18:36.563 "idle_polls": 3496886, 00:18:36.563 "completions": 0, 00:18:36.563 "requests": 0, 00:18:36.563 "request_latency": 0, 00:18:36.563 "pending_free_request": 0, 00:18:36.563 "pending_rdma_read": 0, 00:18:36.563 "pending_rdma_write": 0, 00:18:36.563 "pending_rdma_send": 0, 00:18:36.563 "total_send_wrs": 0, 00:18:36.563 "send_doorbell_updates": 0, 00:18:36.563 "total_recv_wrs": 4096, 00:18:36.563 "recv_doorbell_updates": 1 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 }, 00:18:36.563 { 00:18:36.563 "name": "nvmf_tgt_poll_group_001", 00:18:36.563 "admin_qpairs": 2, 00:18:36.563 "io_qpairs": 26, 00:18:36.563 "current_admin_qpairs": 0, 00:18:36.563 "current_io_qpairs": 0, 00:18:36.563 "pending_bdev_io": 0, 00:18:36.563 "completed_nvme_io": 77, 00:18:36.563 "transports": [ 00:18:36.563 { 00:18:36.563 "trtype": "RDMA", 00:18:36.563 "pending_data_buffer": 0, 00:18:36.563 "devices": [ 00:18:36.563 { 00:18:36.563 "name": "mlx5_0", 00:18:36.563 "polls": 3428316, 00:18:36.563 "idle_polls": 3428078, 00:18:36.563 "completions": 258, 00:18:36.563 "requests": 129, 00:18:36.563 "request_latency": 22191836, 00:18:36.563 "pending_free_request": 0, 00:18:36.563 "pending_rdma_read": 0, 00:18:36.563 "pending_rdma_write": 0, 00:18:36.563 "pending_rdma_send": 0, 00:18:36.563 "total_send_wrs": 204, 00:18:36.563 "send_doorbell_updates": 118, 00:18:36.563 "total_recv_wrs": 4225, 00:18:36.563 "recv_doorbell_updates": 119 00:18:36.563 }, 00:18:36.563 { 00:18:36.563 "name": "mlx5_1", 00:18:36.563 "polls": 3428316, 00:18:36.563 "idle_polls": 3428316, 00:18:36.563 "completions": 0, 00:18:36.563 "requests": 0, 00:18:36.563 "request_latency": 0, 00:18:36.563 "pending_free_request": 0, 00:18:36.563 
"pending_rdma_read": 0, 00:18:36.563 "pending_rdma_write": 0, 00:18:36.563 "pending_rdma_send": 0, 00:18:36.563 "total_send_wrs": 0, 00:18:36.563 "send_doorbell_updates": 0, 00:18:36.563 "total_recv_wrs": 4096, 00:18:36.563 "recv_doorbell_updates": 1 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 }, 00:18:36.563 { 00:18:36.563 "name": "nvmf_tgt_poll_group_002", 00:18:36.563 "admin_qpairs": 1, 00:18:36.563 "io_qpairs": 26, 00:18:36.563 "current_admin_qpairs": 0, 00:18:36.563 "current_io_qpairs": 0, 00:18:36.563 "pending_bdev_io": 0, 00:18:36.563 "completed_nvme_io": 76, 00:18:36.563 "transports": [ 00:18:36.563 { 00:18:36.563 "trtype": "RDMA", 00:18:36.563 "pending_data_buffer": 0, 00:18:36.563 "devices": [ 00:18:36.563 { 00:18:36.563 "name": "mlx5_0", 00:18:36.563 "polls": 3613656, 00:18:36.563 "idle_polls": 3613466, 00:18:36.563 "completions": 209, 00:18:36.563 "requests": 104, 00:18:36.563 "request_latency": 20075368, 00:18:36.563 "pending_free_request": 0, 00:18:36.563 "pending_rdma_read": 0, 00:18:36.563 "pending_rdma_write": 0, 00:18:36.563 "pending_rdma_send": 0, 00:18:36.563 "total_send_wrs": 168, 00:18:36.563 "send_doorbell_updates": 93, 00:18:36.563 "total_recv_wrs": 4200, 00:18:36.563 "recv_doorbell_updates": 93 00:18:36.563 }, 00:18:36.563 { 00:18:36.563 "name": "mlx5_1", 00:18:36.563 "polls": 3613656, 00:18:36.563 "idle_polls": 3613656, 00:18:36.563 "completions": 0, 00:18:36.563 "requests": 0, 00:18:36.563 "request_latency": 0, 00:18:36.563 "pending_free_request": 0, 00:18:36.563 "pending_rdma_read": 0, 00:18:36.563 "pending_rdma_write": 0, 00:18:36.563 "pending_rdma_send": 0, 00:18:36.563 "total_send_wrs": 0, 00:18:36.563 "send_doorbell_updates": 0, 00:18:36.563 "total_recv_wrs": 4096, 00:18:36.563 "recv_doorbell_updates": 1 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 }, 00:18:36.563 { 00:18:36.563 "name": "nvmf_tgt_poll_group_003", 00:18:36.563 "admin_qpairs": 2, 00:18:36.563 "io_qpairs": 26, 00:18:36.563 "current_admin_qpairs": 0, 00:18:36.563 "current_io_qpairs": 0, 00:18:36.563 "pending_bdev_io": 0, 00:18:36.563 "completed_nvme_io": 175, 00:18:36.563 "transports": [ 00:18:36.563 { 00:18:36.563 "trtype": "RDMA", 00:18:36.563 "pending_data_buffer": 0, 00:18:36.563 "devices": [ 00:18:36.563 { 00:18:36.563 "name": "mlx5_0", 00:18:36.563 "polls": 2747148, 00:18:36.563 "idle_polls": 2746755, 00:18:36.563 "completions": 454, 00:18:36.563 "requests": 227, 00:18:36.563 "request_latency": 52807924, 00:18:36.563 "pending_free_request": 0, 00:18:36.563 "pending_rdma_read": 0, 00:18:36.563 "pending_rdma_write": 0, 00:18:36.563 "pending_rdma_send": 0, 00:18:36.563 "total_send_wrs": 400, 00:18:36.563 "send_doorbell_updates": 190, 00:18:36.563 "total_recv_wrs": 4323, 00:18:36.563 "recv_doorbell_updates": 191 00:18:36.563 }, 00:18:36.563 { 00:18:36.563 "name": "mlx5_1", 00:18:36.563 "polls": 2747148, 00:18:36.563 "idle_polls": 2747148, 00:18:36.563 "completions": 0, 00:18:36.563 "requests": 0, 00:18:36.563 "request_latency": 0, 00:18:36.563 "pending_free_request": 0, 00:18:36.563 "pending_rdma_read": 0, 00:18:36.563 "pending_rdma_write": 0, 00:18:36.563 "pending_rdma_send": 0, 00:18:36.563 "total_send_wrs": 0, 00:18:36.563 "send_doorbell_updates": 0, 00:18:36.563 "total_recv_wrs": 4096, 00:18:36.563 "recv_doorbell_updates": 1 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 } 00:18:36.563 ] 00:18:36.563 }' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1286 > 0 )) 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 132133262 > 0 )) 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:36.563 rmmod nvme_rdma 00:18:36.563 rmmod nvme_fabrics 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:36.563 
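The four (( ... > 0 )) assertions above sum a jq filter over the nvmf_get_stats JSON captured into $stats; per the target/rpc.sh@19-20 tags the helper amounts to the sketch below (how $stats is fed to jq is an assumption; the filter and awk stages are verbatim from the trace):

jsum() {
    local filter=$1
    # sum the numeric values selected by the filter across all poll groups
    jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))                            # 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))                               # 105
(( $(jsum '.poll_groups[].transports[].devices[].completions') > 0 ))      # 1286
(( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))  # 132133262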
19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 288205 ']' 00:18:36.563 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 288205 00:18:36.564 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 288205 ']' 00:18:36.564 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 288205 00:18:36.564 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:36.564 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.564 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288205 00:18:36.824 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.824 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.824 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288205' 00:18:36.824 killing process with pid 288205 00:18:36.824 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 288205 00:18:36.824 19:13:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 288205 00:18:37.084 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:37.084 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:37.084 00:18:37.084 real 0m37.676s 00:18:37.084 user 2m1.968s 00:18:37.084 sys 0m7.346s 00:18:37.084 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.084 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.084 ************************************ 00:18:37.084 END TEST nvmf_rpc 00:18:37.084 ************************************ 00:18:37.084 19:13:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:18:37.085 19:13:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:37.085 19:13:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.085 19:13:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.085 ************************************ 00:18:37.085 START TEST nvmf_invalid 00:18:37.085 ************************************ 00:18:37.085 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:18:37.085 * Looking for test storage... 
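The END TEST / START TEST banners and the real/user/sys line come from the run_test harness that wraps each suite; a minimal sketch, assuming the banner layout seen above rather than the exact autotest_common.sh implementation:

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"            # produces the real/user/sys summary printed above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma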
00:18:37.085 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:37.085 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:37.085 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:37.085 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:37.346 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.347 --rc genhtml_branch_coverage=1 00:18:37.347 --rc genhtml_function_coverage=1 00:18:37.347 --rc genhtml_legend=1 00:18:37.347 --rc geninfo_all_blocks=1 00:18:37.347 --rc geninfo_unexecuted_blocks=1 00:18:37.347 00:18:37.347 ' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.347 --rc genhtml_branch_coverage=1 00:18:37.347 --rc genhtml_function_coverage=1 00:18:37.347 --rc genhtml_legend=1 00:18:37.347 --rc geninfo_all_blocks=1 00:18:37.347 --rc geninfo_unexecuted_blocks=1 00:18:37.347 00:18:37.347 ' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.347 --rc genhtml_branch_coverage=1 00:18:37.347 --rc genhtml_function_coverage=1 00:18:37.347 --rc genhtml_legend=1 00:18:37.347 --rc geninfo_all_blocks=1 00:18:37.347 --rc geninfo_unexecuted_blocks=1 00:18:37.347 00:18:37.347 ' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.347 --rc genhtml_branch_coverage=1 00:18:37.347 --rc genhtml_function_coverage=1 00:18:37.347 --rc genhtml_legend=1 00:18:37.347 --rc geninfo_all_blocks=1 00:18:37.347 --rc geninfo_unexecuted_blocks=1 00:18:37.347 00:18:37.347 ' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:37.347 
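
The lcov version probe above ends in scripts/common.sh's comparator: both version strings are split on ., -, and : into arrays and compared numerically, component by component. Roughly, under the same conventions (missing components treated as 0):

version_lt() {
    # Rough sketch of the cmp_versions '<' path traced above.
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal: not less-than
}

version_lt 1.15 2 && echo 'lcov 1.15 predates 2'
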
19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:37.347 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:37.347 19:13:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.489 19:13:18 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:45.489 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:45.489 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:45.489 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:45.489 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:18:45.489 19:13:18 
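
Both adapter ports were just resolved the same way: match the PCI function against the known Mellanox device ids (0x1015 here), then read its net device name out of sysfs. The sysfs half is a one-glob lookup; with the address from this run:

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev on the function
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep e.g. mlx_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"
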
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:45.489 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:45.490 19:13:18 
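
get_rdma_if_list then intersects those netdevs with the interfaces rxe_cfg reports, echoing only names that appear in both; continue 2 jumps to the next candidate as soon as one matches. A trimmed sketch, with the rxe_cfg output faked so it runs standalone:

net_devs=(mlx_0_0 mlx_0_1)
rxe_net_devs=(mlx_0_0 mlx_0_1)   # stand-in for: mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
for net_dev in "${net_devs[@]}"; do
    for rxe_net_dev in "${rxe_net_devs[@]}"; do
        if [[ $net_dev == "$rxe_net_dev" ]]; then
            echo "$net_dev"
            continue 2           # matched: move on to the next net_dev
        fi
    done
done
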
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:45.490 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:45.490 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:45.490 altname enp217s0f0np0 00:18:45.490 altname ens818f0np0 00:18:45.490 inet 192.168.100.8/24 scope global mlx_0_0 00:18:45.490 valid_lft forever preferred_lft forever 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:45.490 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:45.490 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:45.490 altname enp217s0f1np1 00:18:45.490 altname ens818f1np1 00:18:45.490 inet 192.168.100.9/24 scope global mlx_0_1 00:18:45.490 valid_lft forever preferred_lft forever 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:45.490 19:13:18 
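
Each port's IP was pulled with the same pipeline: fourth field of `ip -o -4 addr show`, prefix length cut off. As a standalone helper, using an interface name from this run:

get_ip_address() {
    # IPv4 address of an interface, without the /24-style prefix.
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

iface=mlx_0_0
ip_addr=$(get_ip_address "$iface")   # 192.168.100.8 on this machine
[[ -n $ip_addr ]] || echo "no IPv4 address on $iface" >&2
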
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:45.490 192.168.100.9' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:45.490 192.168.100.9' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:45.490 19:13:18 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:45.490 192.168.100.9' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=296859 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 296859 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 296859 ']' 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.490 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.490 [2024-12-13 19:13:18.915614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:45.490 [2024-12-13 19:13:18.915679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.490 [2024-12-13 19:13:19.009653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.490 [2024-12-13 19:13:19.032620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.490 [2024-12-13 19:13:19.032659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
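
With both interfaces enumerated, RDMA_IP_LIST comes back as a two-line string, and the first and second target IPs fall out with head and tail before nvmf_tgt is launched and waitforlisten polls /var/tmp/spdk.sock. The split in isolation, with this run's addresses inlined:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9
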
00:18:45.490 [2024-12-13 19:13:19.032668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.490 [2024-12-13 19:13:19.032677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.490 [2024-12-13 19:13:19.032684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.490 [2024-12-13 19:13:19.034237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.490 [2024-12-13 19:13:19.034345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.490 [2024-12-13 19:13:19.034453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.490 [2024-12-13 19:13:19.034454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14093 00:18:45.491 [2024-12-13 19:13:19.351907] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:45.491 { 00:18:45.491 "nqn": "nqn.2016-06.io.spdk:cnode14093", 00:18:45.491 "tgt_name": "foobar", 00:18:45.491 "method": "nvmf_create_subsystem", 00:18:45.491 "req_id": 1 00:18:45.491 } 00:18:45.491 Got JSON-RPC error response 00:18:45.491 response: 00:18:45.491 { 00:18:45.491 "code": -32603, 00:18:45.491 "message": "Unable to find target foobar" 00:18:45.491 }' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:45.491 { 00:18:45.491 "nqn": "nqn.2016-06.io.spdk:cnode14093", 00:18:45.491 "tgt_name": "foobar", 00:18:45.491 "method": "nvmf_create_subsystem", 00:18:45.491 "req_id": 1 00:18:45.491 } 00:18:45.491 Got JSON-RPC error response 00:18:45.491 response: 00:18:45.491 { 00:18:45.491 "code": -32603, 00:18:45.491 "message": "Unable to find target foobar" 00:18:45.491 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2809 00:18:45.491 [2024-12-13 19:13:19.564616] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode2809: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:45.491 { 00:18:45.491 "nqn": "nqn.2016-06.io.spdk:cnode2809", 00:18:45.491 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:45.491 "method": "nvmf_create_subsystem", 00:18:45.491 "req_id": 1 00:18:45.491 } 00:18:45.491 Got JSON-RPC error response 00:18:45.491 response: 00:18:45.491 { 00:18:45.491 "code": -32602, 00:18:45.491 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:45.491 }' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:45.491 { 00:18:45.491 "nqn": "nqn.2016-06.io.spdk:cnode2809", 00:18:45.491 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:45.491 "method": "nvmf_create_subsystem", 00:18:45.491 "req_id": 1 00:18:45.491 } 00:18:45.491 Got JSON-RPC error response 00:18:45.491 response: 00:18:45.491 { 00:18:45.491 "code": -32602, 00:18:45.491 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:45.491 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15963 00:18:45.491 [2024-12-13 19:13:19.777309] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15963: invalid model number 'SPDK_Controller' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:45.491 { 00:18:45.491 "nqn": "nqn.2016-06.io.spdk:cnode15963", 00:18:45.491 "model_number": "SPDK_Controller\u001f", 00:18:45.491 "method": "nvmf_create_subsystem", 00:18:45.491 "req_id": 1 00:18:45.491 } 00:18:45.491 Got JSON-RPC error response 00:18:45.491 response: 00:18:45.491 { 00:18:45.491 "code": -32602, 00:18:45.491 "message": "Invalid MN SPDK_Controller\u001f" 00:18:45.491 }' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:45.491 { 00:18:45.491 "nqn": "nqn.2016-06.io.spdk:cnode15963", 00:18:45.491 "model_number": "SPDK_Controller\u001f", 00:18:45.491 "method": "nvmf_create_subsystem", 00:18:45.491 "req_id": 1 00:18:45.491 } 00:18:45.491 Got JSON-RPC error response 00:18:45.491 response: 00:18:45.491 { 00:18:45.491 "code": -32602, 00:18:45.491 "message": "Invalid MN SPDK_Controller\u001f" 00:18:45.491 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
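
The rejections above share one shape: hand nvmf_create_subsystem a bad target name, serial, or model (the control byte is injected with ANSI-C $'...\037' quoting), capture the JSON-RPC error, and glob-match the message (xtrace renders the glob as *\I\n\v\a\l\i\d\ \S\N*). Schematically, with the rpc.py path shortened:

rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode

# $'...\037' smuggles a unit-separator byte into the serial number.
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' "${nqn}2809" 2>&1) || true

if [[ $out == *"Invalid SN"* ]]; then
    echo 'rejected as expected'
else
    echo "unexpected response: $out" >&2
fi
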
target/invalid.sh@21 -- # local chars 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.491 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:45.752 19:13:19 
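
The long run being traced here is gen_random_s assembling a 21-character string one byte at a time: pick a printable ASCII code, render it via printf %x plus echo -e, and append it to string. Collapsed into a loop; note the real helper indexes a fixed chars table (codes 32-127) with $RANDOM, while this sketch draws codes directly:

gen_random_s() {
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        local code=$(( RANDOM % 95 + 32 ))            # printable ASCII, 32-126
        string+=$(echo -e "\\x$(printf %x "$code")")  # hex code -> one character
    done
    echo "$string"
}

gen_random_s 21   # e.g. an 'oQEvk<f_wB...'-style string, fed back into the subsystem RPCs
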
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:45.752 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:45.753 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.753 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.753 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ o == \- ]] 00:18:45.753 19:13:19 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'oQEvk ver2_l ? ver1_l : ver2_l) )) 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.884 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.885 --rc genhtml_branch_coverage=1 00:18:48.885 --rc genhtml_function_coverage=1 00:18:48.885 --rc genhtml_legend=1 00:18:48.885 --rc geninfo_all_blocks=1 00:18:48.885 --rc geninfo_unexecuted_blocks=1 00:18:48.885 00:18:48.885 ' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.885 --rc genhtml_branch_coverage=1 00:18:48.885 --rc genhtml_function_coverage=1 00:18:48.885 --rc genhtml_legend=1 00:18:48.885 --rc geninfo_all_blocks=1 00:18:48.885 --rc geninfo_unexecuted_blocks=1 
00:18:48.885 00:18:48.885 ' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.885 --rc genhtml_branch_coverage=1 00:18:48.885 --rc genhtml_function_coverage=1 00:18:48.885 --rc genhtml_legend=1 00:18:48.885 --rc geninfo_all_blocks=1 00:18:48.885 --rc geninfo_unexecuted_blocks=1 00:18:48.885 00:18:48.885 ' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.885 --rc genhtml_branch_coverage=1 00:18:48.885 --rc genhtml_function_coverage=1 00:18:48.885 --rc genhtml_legend=1 00:18:48.885 --rc geninfo_all_blocks=1 00:18:48.885 --rc geninfo_unexecuted_blocks=1 00:18:48.885 00:18:48.885 ' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.885 19:13:23 
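The decimal / ver1[v] / ver2[v] steps above are a dotted-version comparison used to pick lcov options: each component is validated as a number, then the components are compared left to right until one side wins. A simplified re-sketch of that walk (hedged, not a verbatim copy of scripts/common.sh):

    #!/usr/bin/env bash
    # Return 0 (true) when version $1 sorts strictly before version $2.
    version_lt() {
        local -a ver1 ver2
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$2"
        local v a b ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0     # the decimal() guard from the trace
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                            # equal versions are not "less than"
    }

    version_lt 1.14 2.0 && echo 'lcov predates 2.0, keep the branch-coverage flags'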
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
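Note how the exported PATH above keeps growing: paths/export.sh prepends the same toolchain directories every time it is sourced, so each trace line shows the list a little longer. Guarding the prepend keeps repeated sourcing idempotent; a small sketch (pathmunge is a conventional helper name, not something defined in export.sh):

    #!/usr/bin/env bash
    # Prepend a directory to PATH only if it is not already present.
    pathmunge() {
        case ":$PATH:" in
            *":$1:"*) ;;                    # already on PATH, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }

    pathmunge /opt/go/1.21.1/bin
    pathmunge /opt/go/1.21.1/bin            # second call is a no-op
    export PATH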
nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.885 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.885 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.886 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.886 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.146 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:49.146 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:49.146 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:49.146 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.287 19:13:30 
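The captured failure above, "line 33: [: : integer expression expected", is the classic empty-operand bug: '[' '' -eq 1 ']' hands test an empty string where -eq needs an integer, because the variable being tested was never set. Defaulting the expansion makes the comparison well-defined; a minimal sketch with an illustrative variable name:

    #!/usr/bin/env bash
    flag=""                                # unset/empty, as in the trace

    # [ "$flag" -eq 1 ]                    # -> "[: : integer expression expected"

    if [ "${flag:-0}" -eq 1 ]; then        # empty expands to 0, test stays valid
        echo "flag set"
    fi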
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:57.287 Found 0000:d9:00.0 
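The e810/x722/mlx lookups above index an associative array by "$vendor:$device", and the detection loop prints "Found 0000:d9:00.0 (0x15b3 - 0x1015)" for each hit. A hedged sketch of how such a cache can be built straight from sysfs (the array name matches the trace; the build loop here is an assumption):

    #!/usr/bin/env bash
    # Bucket PCI functions by "vendor:device", the shape behind the
    # pci_bus_cache["$mellanox:0x1015"] lookups in the trace.
    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")            # e.g. 0x15b3 (Mellanox)
        device=$(<"$dev/device")            # e.g. 0x1015 (ConnectX-4 Lx)
        pci_bus_cache["$vendor:$device"]+="${dev##*/} "
    done

    mellanox=0x15b3
    for pci in ${pci_bus_cache["$mellanox:0x1015"]-}; do
        echo "Found $pci ($mellanox - 0x1015)"
    done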
(0x15b3 - 0x1015) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:57.287 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:57.287 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.287 19:13:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:57.287 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.287 
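rdma_device_init above loads the whole IB/RDMA kernel stack with one modprobe per module, relying on the autotest's error handling to abort on failure. The same sequence with explicit error reporting, as a standalone sketch:

    #!/usr/bin/env bash
    # Load the IB/RDMA modules in the order the trace uses.
    [ "$(uname -s)" = Linux ] || { echo "RDMA modules are Linux-only" >&2; exit 1; }

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
    done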
19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:57.287 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:57.288 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:57.288 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:57.288 altname enp217s0f0np0 00:18:57.288 altname ens818f0np0 00:18:57.288 inet 192.168.100.8/24 scope global mlx_0_0 00:18:57.288 valid_lft forever preferred_lft forever 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:57.288 
19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:57.288 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:57.288 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:57.288 altname enp217s0f1np1 00:18:57.288 altname ens818f1np1 00:18:57.288 inet 192.168.100.9/24 scope global mlx_0_1 00:18:57.288 valid_lft forever preferred_lft forever 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:57.288 19:13:30 
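Address discovery above is a three-stage pipeline per interface: ip -o -4 addr show prints one line per IPv4 address, awk takes the ADDR/PREFIX column ($4), and cut drops the prefix length. As a reusable function (this mirrors the trace's get_ip_address):

    #!/usr/bin/env bash
    # First IPv4 address of an interface, e.g. "192.168.100.8" for mlx_0_0.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip_addr=$(get_ip_address mlx_0_0)
    [ -n "$ip_addr" ] || echo "no IPv4 address on mlx_0_0" >&2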
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:57.288 192.168.100.9' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:57.288 192.168.100.9' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:57.288 192.168.100.9' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=301020 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # 
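Selecting the two target IPs out of RDMA_IP_LIST is the head/tail idiom shown above: head -n 1 takes the first line, tail -n +2 | head -n 1 takes the second. Reproduced standalone with this rig's addresses:

    #!/usr/bin/env bash
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    echo "first: $NVMF_FIRST_TARGET_IP, second: $NVMF_SECOND_TARGET_IP"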
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 301020 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 301020 ']' 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.288 [2024-12-13 19:13:30.558509] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:57.288 [2024-12-13 19:13:30.558558] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.288 [2024-12-13 19:13:30.651829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:57.288 [2024-12-13 19:13:30.674114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.288 [2024-12-13 19:13:30.674152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.288 [2024-12-13 19:13:30.674161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.288 [2024-12-13 19:13:30.674169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.288 [2024-12-13 19:13:30.674176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
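nvmfappstart launches nvmf_tgt in the background and then blocks until the RPC socket answers, which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above. A simplified stand-in for that wait, assuming SPDK's scripts/rpc.py is on PATH and that polling rpc_get_methods is an acceptable readiness probe (the real waitforlisten helper is considerably more careful):

    #!/usr/bin/env bash
    # Poll until the target's UNIX-domain RPC socket accepts a request.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1      # app died early
            if [ -S "$sock" ] && rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1                                        # timed out
    }

    # usage: build/bin/nvmf_tgt -m 0xE & wait_for_rpc $! || exit 1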
00:18:57.288 [2024-12-13 19:13:30.675767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.288 [2024-12-13 19:13:30.675848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.288 [2024-12-13 19:13:30.675849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.288 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.289 [2024-12-13 19:13:30.836753] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdfdc40/0xe020f0) succeed. 00:18:57.289 [2024-12-13 19:13:30.845905] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdff1e0/0xe43790) succeed. 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.289 [2024-12-13 19:13:30.967549] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.289 
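The rpc_cmd calls above are the entire target bring-up: an RDMA transport, a subsystem capped at 10 namespaces, a listener on 192.168.100.8:4420, and a 1000 MB null bdev with 512-byte blocks. The same sequence through scripts/rpc.py, which rpc_cmd wraps; the final add_ns step is not visible in this excerpt and is included only as an assumption, since hosts can reach a namespace only once one is attached:

    #!/usr/bin/env bash
    sock=/var/tmp/spdk.sock

    rpc.py -s "$sock" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    rpc.py -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    rpc.py -s "$sock" bdev_null_create NULL1 1000 512

    # Assumed, not shown in the excerpt: expose NULL1 as a namespace.
    rpc.py -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1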
NULL1 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=301251 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.289 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.549 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.549 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:57.549 19:13:31 
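Everything from here to the end of the test is one pattern repeated: kill -0 $PERF_PID sends no signal at all, it merely asks whether the PID still exists, so the script keeps replaying its batched RPC file for as long as connect_stress stays up. A distilled sketch of that supervision loop (the stress invocation is taken from the trace; replaying rpc.txt line by line through rpc.py is an assumption standing in for rpc_cmd):

    #!/usr/bin/env bash
    # Background the stress tool, poke the target while it runs, then reap it.
    ./connect_stress -c 0x1 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    PERF_PID=$!

    while kill -0 "$PERF_PID" 2>/dev/null; do
        while read -r cmd; do
            rpc.py -s /var/tmp/spdk.sock $cmd   # one RPC per line of rpc.txt
        done < rpc.txt
    done

    wait "$PERF_PID"    # reap it; propagates connect_stress's exit status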
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.549 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.549 19:13:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.810 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.810 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:57.810 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.810 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.810 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.070 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.070 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:58.070 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.070 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.070 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.641 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.641 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:58.641 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.641 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.641 19:13:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.901 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.901 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:58.901 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.901 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.901 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.162 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.162 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:59.162 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.162 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.162 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.422 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.422 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:59.422 
19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.422 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.422 19:13:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.683 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.683 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:18:59.683 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.683 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.683 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:00.253 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.253 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:00.253 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:00.253 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.253 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:00.513 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.513 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:00.513 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:00.513 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.513 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:00.773 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.773 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:00.773 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:00.773 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.773 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.033 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.033 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:01.033 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:01.033 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.033 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.293 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.293 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 
00:19:01.293 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:01.293 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.293 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.863 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.863 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:01.863 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:01.863 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.863 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.123 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.123 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:02.123 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.123 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.123 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.383 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.383 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:02.383 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.383 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.383 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.643 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.643 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:02.643 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.643 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.643 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.213 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.213 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:03.213 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.213 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.213 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.473 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.473 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 
301251 00:19:03.473 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.473 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.473 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.733 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.733 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:03.733 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.733 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.733 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.994 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.994 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:03.994 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.994 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.994 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:04.255 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.255 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:04.255 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:04.255 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.255 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:04.826 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.826 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:04.826 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:04.826 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.826 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.086 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.086 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:05.086 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.086 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.086 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.346 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.346 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 301251 00:19:05.346 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.346 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.346 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.605 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.605 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:05.605 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.605 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.605 19:13:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.865 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.865 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:05.865 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.865 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.865 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:06.435 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.435 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:06.435 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:06.435 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.435 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:06.695 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.695 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:06.695 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:06.695 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.695 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:06.955 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.955 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:06.955 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:06.955 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:06.955 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.955 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.215 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
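The interleaved 'kill -0 301251' and rpc_cmd records above are connect_stress.sh (lines 34-35 per the trace) polling whether the stress tool, PID 301251, is still alive while keeping RPC traffic flowing at the target. A minimal sketch of that loop shape, assuming a pre-existing rpc_cmd wrapper around the target's RPC socket; this is the pattern, not the actual connect_stress.sh source:

    # kill -0 sends no signal; it only tests that the PID exists and is
    # signalable, which is why it works as a liveness probe here.
    pid=301251                       # stressor PID from the trace above
    while kill -0 "$pid" 2>/dev/null; do
        rpc_cmd > rpc.txt            # hypothetical: exercise the target RPC path
        sleep 1
    done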
00:19:07.215 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 301251 00:19:07.216 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (301251) - No such process 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 301251 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.216 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:07.216 rmmod nvme_rdma 00:19:07.216 rmmod nvme_fabrics 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 301020 ']' 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 301020 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 301020 ']' 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 301020 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301020 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301020' 00:19:07.476 killing process with pid 301020 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 301020 00:19:07.476 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 301020 
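The teardown following the 'No such process' record is the nvmftestfini path from nvmf/common.sh: retry-unload the initiator kernel modules, then kill the nvmf_tgt process (PID 301020) and reap it. A sketch of that cleanup shape, under the assumption that $nvmfpid holds the target PID; the retry bounds mirror the {1..20} loop visible in the trace:

    set +e                               # module removal may fail while in use
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid"                      # $nvmfpid = 301020 in the trace above
    wait "$nvmfpid" 2>/dev/null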
00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:07.737 00:19:07.737 real 0m18.871s 00:19:07.737 user 0m41.093s 00:19:07.737 sys 0m8.191s 00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.737 ************************************ 00:19:07.737 END TEST nvmf_connect_stress 00:19:07.737 ************************************ 00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.737 ************************************ 00:19:07.737 START TEST nvmf_fused_ordering 00:19:07.737 ************************************ 00:19:07.737 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:19:07.737 * Looking for test storage... 00:19:07.737 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:07.737 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.737 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.737 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@344 -- # case "$op" in 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.998 --rc genhtml_branch_coverage=1 00:19:07.998 --rc genhtml_function_coverage=1 00:19:07.998 --rc genhtml_legend=1 00:19:07.998 --rc geninfo_all_blocks=1 00:19:07.998 --rc geninfo_unexecuted_blocks=1 00:19:07.998 00:19:07.998 ' 00:19:07.998 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.998 --rc genhtml_branch_coverage=1 00:19:07.998 --rc genhtml_function_coverage=1 00:19:07.998 --rc genhtml_legend=1 00:19:07.998 --rc geninfo_all_blocks=1 00:19:07.998 --rc geninfo_unexecuted_blocks=1 00:19:07.998 00:19:07.999 ' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.999 --rc genhtml_branch_coverage=1 00:19:07.999 --rc genhtml_function_coverage=1 00:19:07.999 --rc genhtml_legend=1 00:19:07.999 --rc geninfo_all_blocks=1 00:19:07.999 --rc geninfo_unexecuted_blocks=1 00:19:07.999 00:19:07.999 ' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:19:07.999 --rc genhtml_branch_coverage=1 00:19:07.999 --rc genhtml_function_coverage=1 00:19:07.999 --rc genhtml_legend=1 00:19:07.999 --rc geninfo_all_blocks=1 00:19:07.999 --rc geninfo_unexecuted_blocks=1 00:19:07.999 00:19:07.999 ' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.999 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:19:07.999 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:16.140 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:16.140 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:16.140 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.140 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:16.141 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.141 19:13:49 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:16.141 
19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:16.141 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.141 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:16.141 altname enp217s0f0np0 00:19:16.141 altname ens818f0np0 00:19:16.141 inet 192.168.100.8/24 scope global mlx_0_0 00:19:16.141 valid_lft forever preferred_lft forever 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:16.141 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.141 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:16.141 altname enp217s0f1np1 00:19:16.141 altname ens818f1np1 00:19:16.141 inet 192.168.100.9/24 scope global mlx_0_1 00:19:16.141 valid_lft forever preferred_lft forever 00:19:16.141 19:13:49 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:16.141 
19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:16.141 192.168.100.9' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:16.141 192.168.100.9' 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:16.141 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:16.142 192.168.100.9' 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=306334 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 306334 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 306334 ']' 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.142 19:13:49 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 [2024-12-13 19:13:49.525770] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:16.142 [2024-12-13 19:13:49.525818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.142 [2024-12-13 19:13:49.620525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.142 [2024-12-13 19:13:49.641452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.142 [2024-12-13 19:13:49.641489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.142 [2024-12-13 19:13:49.641499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.142 [2024-12-13 19:13:49.641507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.142 [2024-12-13 19:13:49.641513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.142 [2024-12-13 19:13:49.642123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 [2024-12-13 19:13:49.803535] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x217e680/0x2182b30) succeed. 00:19:16.142 [2024-12-13 19:13:49.812303] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x217fae0/0x21c41d0) succeed. 
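Before the target can listen on 192.168.100.8:4420 (configured by the rpc_cmd calls that follow), the harness has already harvested the RDMA interface IPs; the get_ip_address records traced earlier reduce to this pipeline, with the commands and results taken directly from the trace:

    # Column 4 of 'ip -o -4 addr show DEV' is "ADDR/PREFIX";
    # cut strips the prefix length, leaving the bare address.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 in the trace
    get_ip_address mlx_0_1    # -> 192.168.100.9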
00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 [2024-12-13 19:13:49.858103] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 NULL1 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.142 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:16.142 [2024-12-13 19:13:49.916029] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
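The rpc_cmd sequence above (fused_ordering.sh lines 15-20) builds the target that the fused_ordering tool then connects to: an RDMA transport, subsystem cnode1, a listener on 192.168.100.8:4420, and a null bdev exposed as namespace 1. Spelled out as explicit rpc.py invocations, with arguments copied verbatim from the trace; that rpc_cmd forwards to scripts/rpc.py in exactly this form is an assumption of the sketch:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MiB backing bdev, 512 B blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1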
00:19:16.142 [2024-12-13 19:13:49.916072] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306495 ] 00:19:16.142 Attached to nqn.2016-06.io.spdk:cnode1 00:19:16.142 Namespace ID: 1 size: 1GB 00:19:16.142 fused_ordering(0)
[fused_ordering(1) through fused_ordering(958) elided: 958 identical per-iteration entries counting up by one, all logged between 00:19:16.142 and 00:19:16.406]
00:19:16.406 fused_ordering(959) 00:19:16.406 fused_ordering(960) 00:19:16.406 fused_ordering(961) 00:19:16.406 fused_ordering(962) 00:19:16.406 fused_ordering(963) 00:19:16.406 fused_ordering(964) 00:19:16.406 fused_ordering(965) 00:19:16.406 fused_ordering(966) 00:19:16.406 fused_ordering(967) 00:19:16.406 fused_ordering(968) 00:19:16.406 fused_ordering(969) 00:19:16.406 fused_ordering(970) 00:19:16.406 fused_ordering(971) 00:19:16.407 fused_ordering(972) 00:19:16.407 fused_ordering(973) 00:19:16.407 fused_ordering(974) 00:19:16.407 fused_ordering(975) 00:19:16.407 fused_ordering(976) 00:19:16.407 fused_ordering(977) 00:19:16.407 fused_ordering(978) 00:19:16.407 fused_ordering(979) 00:19:16.407 fused_ordering(980) 00:19:16.407 fused_ordering(981) 00:19:16.407 fused_ordering(982) 00:19:16.407 fused_ordering(983) 00:19:16.407 fused_ordering(984) 00:19:16.407 fused_ordering(985) 00:19:16.407 fused_ordering(986) 00:19:16.407 fused_ordering(987) 00:19:16.407 fused_ordering(988) 00:19:16.407 fused_ordering(989) 00:19:16.407 fused_ordering(990) 00:19:16.407 fused_ordering(991) 00:19:16.407 fused_ordering(992) 00:19:16.407 fused_ordering(993) 00:19:16.407 fused_ordering(994) 00:19:16.407 fused_ordering(995) 00:19:16.407 fused_ordering(996) 00:19:16.407 fused_ordering(997) 00:19:16.407 fused_ordering(998) 00:19:16.407 fused_ordering(999) 00:19:16.407 fused_ordering(1000) 00:19:16.407 fused_ordering(1001) 00:19:16.407 fused_ordering(1002) 00:19:16.407 fused_ordering(1003) 00:19:16.407 fused_ordering(1004) 00:19:16.407 fused_ordering(1005) 00:19:16.407 fused_ordering(1006) 00:19:16.407 fused_ordering(1007) 00:19:16.407 fused_ordering(1008) 00:19:16.407 fused_ordering(1009) 00:19:16.407 fused_ordering(1010) 00:19:16.407 fused_ordering(1011) 00:19:16.407 fused_ordering(1012) 00:19:16.407 fused_ordering(1013) 00:19:16.407 fused_ordering(1014) 00:19:16.407 fused_ordering(1015) 00:19:16.407 fused_ordering(1016) 00:19:16.407 fused_ordering(1017) 00:19:16.407 fused_ordering(1018) 00:19:16.407 fused_ordering(1019) 00:19:16.407 fused_ordering(1020) 00:19:16.407 fused_ordering(1021) 00:19:16.407 fused_ordering(1022) 00:19:16.407 fused_ordering(1023) 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:16.407 rmmod nvme_rdma 00:19:16.407 rmmod nvme_fabrics 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:19:16.407 19:13:50 
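The nvmftestfini trace above unloads the kernel NVMe fabrics stack with a guarded retry: errexit is suspended, `modprobe -v -r` is attempted inside a `for i in {1..20}` loop (the rmmod lines are its verbose output), then errexit is restored. A minimal standalone sketch of that idiom; the break-on-success and the sleep between attempts are my additions, not necessarily how the traced common.sh structures the loop:

    set +e                                    # tolerate failures while module references drain
    for i in {1..20}; do
        if modprobe -v -r nvme-rdma; then     # verbose removal prints the rmmod lines seen above
            break
        fi
        sleep 0.5                             # hypothetical back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e                                    # restore fail-fast behavior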
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 306334 ']' 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 306334 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 306334 ']' 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 306334 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306334 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306334' 00:19:16.407 killing process with pid 306334 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 306334 00:19:16.407 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 306334 00:19:16.667 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.667 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:16.667 00:19:16.667 real 0m8.948s 00:19:16.667 user 0m4.178s 00:19:16.667 sys 0m6.030s 00:19:16.667 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.667 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:16.667 ************************************ 00:19:16.667 END TEST nvmf_fused_ordering 00:19:16.667 ************************************ 00:19:16.667 19:13:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:19:16.667 19:13:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.667 19:13:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.667 19:13:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.667 ************************************ 00:19:16.667 START TEST nvmf_ns_masking 00:19:16.667 ************************************ 00:19:16.667 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:19:16.928 * Looking for test storage... 
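killprocess, traced just above, refuses to signal blindly: it probes the PID with `kill -0`, resolves the command name via ps, and only then kills and reaps. A condensed sketch of the same guard (PID hard-coded for illustration; the real helper takes it as an argument):

    pid=306334
    if kill -0 "$pid" 2>/dev/null; then           # probe: does the PID still exist?
        name=$(ps --no-headers -o comm= "$pid")   # resolve the command name, e.g. reactor_1
        if [ "$name" != "sudo" ]; then            # never signal a sudo wrapper directly
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null                   # reap it if it is a child of this shell
    fi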
00:19:16.928 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:16.928 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:16.928 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:19:16.928 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:16.928 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:16.928 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.928 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:16.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.929 --rc genhtml_branch_coverage=1 00:19:16.929 --rc genhtml_function_coverage=1 00:19:16.929 --rc genhtml_legend=1 00:19:16.929 --rc geninfo_all_blocks=1 00:19:16.929 --rc geninfo_unexecuted_blocks=1 00:19:16.929 00:19:16.929 ' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:16.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.929 --rc genhtml_branch_coverage=1 00:19:16.929 --rc genhtml_function_coverage=1 00:19:16.929 --rc genhtml_legend=1 00:19:16.929 --rc geninfo_all_blocks=1 00:19:16.929 --rc geninfo_unexecuted_blocks=1 00:19:16.929 00:19:16.929 ' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:16.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.929 --rc genhtml_branch_coverage=1 00:19:16.929 --rc genhtml_function_coverage=1 00:19:16.929 --rc genhtml_legend=1 00:19:16.929 --rc geninfo_all_blocks=1 00:19:16.929 --rc geninfo_unexecuted_blocks=1 00:19:16.929 00:19:16.929 ' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:16.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.929 --rc genhtml_branch_coverage=1 00:19:16.929 --rc genhtml_function_coverage=1 00:19:16.929 --rc genhtml_legend=1 00:19:16.929 --rc geninfo_all_blocks=1 00:19:16.929 --rc geninfo_unexecuted_blocks=1 00:19:16.929 00:19:16.929 ' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.929 19:13:51 
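The scripts/common.sh trace above is the stock dotted-version comparison: both strings are split on `.`, `-`, and `:`, fields are compared numerically left to right, and missing fields count as zero. A compact sketch of that logic (function name mine; the traced helper is `lt`, and like it this assumes purely numeric fields):

    version_lt() {                                  # true if $1 < $2 as dotted versions
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"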
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.929 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:16.929 19:13:51 
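Earlier in this trace, nvmf/common.sh builds the host identity from nvme-cli: `nvme gen-hostnqn` prints an `nqn.2014-08.org.nvmexpress:uuid:<uuid>` string (stable on machines that expose a DMI system UUID), and the host ID is the trailing UUID. One way to reproduce the two values seen above; the `##*:` expansion is my shorthand, not necessarily how common.sh extracts it:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip through the last ':' -> bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf '%s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"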
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:16.929 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c1bdb824-0bc0-4480-a428-7c7c8ed050c1 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d01e64ef-6ae9-45d5-a496-911f4c4fe43e 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b1b284ab-8c9f-4c16-8168-3960ab281def 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:16.930 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.065 19:13:58 
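gather_supported_nvmf_pci_devs, traced above, buckets candidate NICs by PCI vendor:device ID (Intel 0x8086 E810/X722 parts, Mellanox 0x15b3 ConnectX parts) and only then walks /sys to find their net devices. A rough lspci equivalent of the matching step (illustrative only; the traced script reads a prebuilt pci_bus_cache rather than calling lspci):

    mellanox=0x15b3
    for dev in 0x1015 0x1017 0x101b 0x101d 0x1021; do    # ConnectX device IDs from the trace
        lspci -D -d "${mellanox#0x}:${dev#0x}" |         # -D prints the PCI domain, -d filters vendor:device
        while read -r addr _; do
            echo "Found $addr ($mellanox - $dev)"        # mirrors 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' above
        done
    done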
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:25.065 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:25.065 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:25.065 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:25.065 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.065 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:25.066 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:25.066 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:25.066 altname enp217s0f0np0 00:19:25.066 altname ens818f0np0 00:19:25.066 inet 192.168.100.8/24 scope global mlx_0_0 00:19:25.066 valid_lft forever preferred_lft forever 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:25.066 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:25.066 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:25.066 altname enp217s0f1np1 00:19:25.066 altname ens818f1np1 00:19:25.066 inet 192.168.100.9/24 scope global mlx_0_1 00:19:25.066 valid_lft forever preferred_lft forever 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
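The run above resolves each RDMA interface's IPv4 address with the same three-stage pipeline (nvmf/common.sh's get_ip_address). Condensed into a standalone sketch — the interface names and addresses are the ones discovered in this run; the function body mirrors the traced pipeline and is illustrative, not the verbatim helper:

    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 is "ADDR/PREFIXLEN"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this test bed
    get_ip_address mlx_0_1   # 192.168.100.9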
00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:25.066 192.168.100.9' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:25.066 192.168.100.9' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:25.066 192.168.100.9' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=310054 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 310054 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 310054 ']' 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.066 19:13:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:25.066 [2024-12-13 19:13:58.472060] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:25.066 [2024-12-13 19:13:58.472109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.066 [2024-12-13 19:13:58.563978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.066 [2024-12-13 19:13:58.585093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.066 [2024-12-13 19:13:58.585130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.066 [2024-12-13 19:13:58.585140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.066 [2024-12-13 19:13:58.585149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.066 [2024-12-13 19:13:58.585156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.066 [2024-12-13 19:13:58.585748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:25.066 [2024-12-13 19:13:58.910576] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x123e380/0x1242830) succeed. 00:19:25.066 [2024-12-13 19:13:58.919469] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x123f7e0/0x1283ed0) succeed. 
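The bring-up traced to this point, condensed: the harness launched nvmf_tgt, waited for its RPC socket, and created the RDMA transport; the two "Create IB device ... succeed" notices confirm both mlx5 ports registered. A hedged sketch using the paths and flags visible above (the backgrounding and socket wait paraphrase nvmfappstart/waitforlisten rather than quoting them):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # -i 0 sets the shared-memory id, -e 0xFFFF the tracepoint group mask
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # block until the target listens on /var/tmp/spdk.sock, then add the transport
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192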
00:19:25.066 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:25.067 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:25.067 19:13:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:25.067 Malloc1 00:19:25.067 19:13:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:25.067 Malloc2 00:19:25.067 19:13:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:25.327 19:13:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:25.587 19:13:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:25.847 [2024-12-13 19:13:59.986688] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:25.847 19:14:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:25.847 19:14:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b1b284ab-8c9f-4c16-8168-3960ab281def -a 192.168.100.8 -s 4420 -i 4 00:19:26.107 19:14:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:26.107 19:14:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:26.107 19:14:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:26.107 19:14:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:26.107 19:14:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:28.019 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:28.019 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:28.019 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.019 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:28.019 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.019 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:28.019 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:28.019 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme 
list-subsys -o json 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:28.279 [ 0]:0x1 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10bdb2ae453f43baaad09d30a8244412 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10bdb2ae453f43baaad09d30a8244412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.279 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:28.539 [ 0]:0x1 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10bdb2ae453f43baaad09d30a8244412 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10bdb2ae453f43baaad09d30a8244412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:28.539 [ 1]:0x2 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9c35c6b093f45b4af5f9ccfaedd5ac7 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9c35c6b093f45b4af5f9ccfaedd5ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.539 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:28.540 19:14:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode1 00:19:28.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.800 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.060 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:29.320 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:29.320 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b1b284ab-8c9f-4c16-8168-3960ab281def -a 192.168.100.8 -s 4420 -i 4 00:19:29.581 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:29.581 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:29.581 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:29.581 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:29.581 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:29.581 19:14:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:31.495 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:31.495 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:31.495 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:31.495 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:31.495 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:31.495 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:31.755 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:31.756 [ 0]:0x2 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:31.756 19:14:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:31.756 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9c35c6b093f45b4af5f9ccfaedd5ac7 00:19:31.756 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9c35c6b093f45b4af5f9ccfaedd5ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:31.756 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:32.016 [ 0]:0x1 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.016 19:14:06 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10bdb2ae453f43baaad09d30a8244412 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10bdb2ae453f43baaad09d30a8244412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:32.016 [ 1]:0x2 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9c35c6b093f45b4af5f9ccfaedd5ac7 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9c35c6b093f45b4af5f9ccfaedd5ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.016 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:32.277 [ 0]:0x2 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9c35c6b093f45b4af5f9ccfaedd5ac7 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9c35c6b093f45b4af5f9ccfaedd5ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:32.277 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:32.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.847 19:14:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:32.847 19:14:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:32.847 19:14:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b1b284ab-8c9f-4c16-8168-3960ab281def -a 192.168.100.8 -s 4420 -i 4 00:19:33.106 19:14:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:33.106 19:14:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:33.106 19:14:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.106 19:14:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:33.106 19:14:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:33.106 19:14:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.651 19:14:09 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:35.651 [ 0]:0x1 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=10bdb2ae453f43baaad09d30a8244412 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 10bdb2ae453f43baaad09d30a8244412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:35.651 [ 1]:0x2 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9c35c6b093f45b4af5f9ccfaedd5ac7 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9c35c6b093f45b4af5f9ccfaedd5ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:35.651 19:14:09 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:35.651 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:35.652 [ 0]:0x2 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9c35c6b093f45b4af5f9ccfaedd5ac7 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9c35c6b093f45b4af5f9ccfaedd5ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:19:35.652 19:14:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:35.912 [2024-12-13 19:14:10.130365] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:35.912 request: 00:19:35.912 { 00:19:35.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.912 "nsid": 2, 00:19:35.912 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.912 "method": "nvmf_ns_remove_host", 00:19:35.912 "req_id": 1 00:19:35.912 } 00:19:35.912 Got JSON-RPC error response 00:19:35.912 response: 00:19:35.912 { 00:19:35.912 "code": -32602, 00:19:35.912 "message": "Invalid parameters" 00:19:35.912 } 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:35.913 [ 0]:0x2 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c9c35c6b093f45b4af5f9ccfaedd5ac7 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c9c35c6b093f45b4af5f9ccfaedd5ac7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:35.913 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=312289 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 312289 /var/tmp/host.sock 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 312289 ']' 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:36.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
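Every ns_is_visible / NOT ns_is_visible step traced above reduces to two checks against the connected controller: does the namespace appear in the active-namespace list, and does it report a real NGUID. A condensed reconstruction of the helper at target/ns_masking.sh@43-45 (assuming /dev/nvme0, the controller id resolved by connect earlier; not the verbatim script):

    ns_is_visible() {    # $1 = namespace id, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"   # prints e.g. "[ 0]:0x1" when attached
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a namespace masked away from this host identifies with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }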
00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.491 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 [2024-12-13 19:14:10.638922] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:36.491 [2024-12-13 19:14:10.638979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312289 ] 00:19:36.491 [2024-12-13 19:14:10.733457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.491 [2024-12-13 19:14:10.755751] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.752 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.752 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:36.752 19:14:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:37.012 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:37.012 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c1bdb824-0bc0-4480-a428-7c7c8ed050c1 00:19:37.012 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:37.012 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C1BDB8240BC04480A4287C7C8ED050C1 -i 00:19:37.273 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d01e64ef-6ae9-45d5-a496-911f4c4fe43e 00:19:37.273 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:37.273 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D01E64EF6AE945D5A496911F4C4FE43E -i 00:19:37.533 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:37.794 19:14:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:37.794 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:37.794 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:19:38.054 nvme0n1 00:19:38.314 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:38.314 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:38.314 nvme1n2 00:19:38.314 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:38.314 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:38.314 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:38.314 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:38.314 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:38.575 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:38.575 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:38.575 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:38.575 19:14:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:38.835 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c1bdb824-0bc0-4480-a428-7c7c8ed050c1 == \c\1\b\d\b\8\2\4\-\0\b\c\0\-\4\4\8\0\-\a\4\2\8\-\7\c\7\c\8\e\d\0\5\0\c\1 ]] 00:19:38.835 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:38.835 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:38.835 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:39.095 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d01e64ef-6ae9-45d5-a496-911f4c4fe43e == \d\0\1\e\6\4\e\f\-\6\a\e\9\-\4\5\d\5\-\a\4\9\6\-\9\1\1\f\4\c\4\f\e\4\3\e ]] 00:19:39.095 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c1bdb824-0bc0-4480-a428-7c7c8ed050c1 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C1BDB8240BC04480A4287C7C8ED050C1 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C1BDB8240BC04480A4287C7C8ED050C1 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:19:39.355 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C1BDB8240BC04480A4287C7C8ED050C1 00:19:39.616 [2024-12-13 19:14:13.867978] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:39.616 [2024-12-13 19:14:13.868013] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:39.616 [2024-12-13 19:14:13.868025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:39.616 request: 00:19:39.616 { 00:19:39.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.616 "namespace": { 00:19:39.616 "bdev_name": "invalid", 00:19:39.616 "nsid": 1, 00:19:39.616 "nguid": "C1BDB8240BC04480A4287C7C8ED050C1", 00:19:39.616 "no_auto_visible": false, 00:19:39.616 "hide_metadata": false 00:19:39.616 }, 00:19:39.616 "method": "nvmf_subsystem_add_ns", 00:19:39.616 "req_id": 1 00:19:39.616 } 00:19:39.616 Got JSON-RPC error response 00:19:39.616 response: 00:19:39.616 { 00:19:39.616 "code": -32602, 00:19:39.616 "message": "Invalid parameters" 00:19:39.616 } 00:19:39.616 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:39.616 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.616 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.616 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.616 
19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c1bdb824-0bc0-4480-a428-7c7c8ed050c1 00:19:39.616 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:39.616 19:14:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C1BDB8240BC04480A4287C7C8ED050C1 -i 00:19:39.879 19:14:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:41.791 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:41.791 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:41.791 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 312289 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 312289 ']' 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 312289 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 312289 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 312289' 00:19:42.050 killing process with pid 312289 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 312289 00:19:42.050 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 312289 00:19:42.310 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:42.571 19:14:16 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:42.571 rmmod nvme_rdma 00:19:42.571 rmmod nvme_fabrics 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 310054 ']' 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 310054 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 310054 ']' 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 310054 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.571 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 310054 00:19:42.832 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.832 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.832 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 310054' 00:19:42.832 killing process with pid 310054 00:19:42.832 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 310054 00:19:42.832 19:14:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 310054 00:19:42.832 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:42.832 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:42.832 00:19:42.832 real 0m26.197s 00:19:42.832 user 0m32.372s 00:19:42.832 sys 0m8.094s 00:19:42.832 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.832 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:42.832 ************************************ 00:19:42.832 END TEST nvmf_ns_masking 00:19:42.832 ************************************ 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.093 ************************************ 00:19:43.093 START TEST nvmf_nvme_cli 00:19:43.093 ************************************ 00:19:43.093 
19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:19:43.093 * Looking for test storage... 00:19:43.093 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.093 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.354 --rc genhtml_branch_coverage=1 00:19:43.354 --rc genhtml_function_coverage=1 00:19:43.354 --rc genhtml_legend=1 00:19:43.354 --rc geninfo_all_blocks=1 00:19:43.354 --rc geninfo_unexecuted_blocks=1 00:19:43.354 00:19:43.354 ' 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.354 --rc genhtml_branch_coverage=1 00:19:43.354 --rc genhtml_function_coverage=1 00:19:43.354 --rc genhtml_legend=1 00:19:43.354 --rc geninfo_all_blocks=1 00:19:43.354 --rc geninfo_unexecuted_blocks=1 00:19:43.354 00:19:43.354 ' 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.354 --rc genhtml_branch_coverage=1 00:19:43.354 --rc genhtml_function_coverage=1 00:19:43.354 --rc genhtml_legend=1 00:19:43.354 --rc geninfo_all_blocks=1 00:19:43.354 --rc geninfo_unexecuted_blocks=1 00:19:43.354 00:19:43.354 ' 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.354 --rc genhtml_branch_coverage=1 00:19:43.354 --rc genhtml_function_coverage=1 00:19:43.354 --rc genhtml_legend=1 00:19:43.354 --rc geninfo_all_blocks=1 00:19:43.354 --rc geninfo_unexecuted_blocks=1 00:19:43.354 00:19:43.354 ' 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:43.354 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.355 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.355 19:14:17 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:43.355 19:14:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.488 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:51.489 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:51.489 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:51.489 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:51.489 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:51.489 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:51.489 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:51.489 altname enp217s0f0np0 00:19:51.489 altname ens818f0np0 00:19:51.489 inet 192.168.100.8/24 scope global mlx_0_0 00:19:51.489 valid_lft forever preferred_lft forever 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:51.489 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:51.489 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:51.489 altname enp217s0f1np1 00:19:51.489 altname ens818f1np1 00:19:51.489 inet 192.168.100.9/24 scope global mlx_0_1 00:19:51.489 valid_lft forever preferred_lft forever 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:51.489 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:51.490 19:14:24 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:51.490 192.168.100.9' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:51.490 192.168.100.9' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:51.490 192.168.100.9' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:19:51.490 19:14:24 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=316796 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 316796 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 316796 ']' 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 [2024-12-13 19:14:24.689027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:51.490 [2024-12-13 19:14:24.689092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.490 [2024-12-13 19:14:24.779113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.490 [2024-12-13 19:14:24.804099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.490 [2024-12-13 19:14:24.804140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:51.490 [2024-12-13 19:14:24.804149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.490 [2024-12-13 19:14:24.804158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.490 [2024-12-13 19:14:24.804165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.490 [2024-12-13 19:14:24.805968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.490 [2024-12-13 19:14:24.806104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.490 [2024-12-13 19:14:24.806154] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.490 [2024-12-13 19:14:24.806154] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.490 19:14:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 [2024-12-13 19:14:24.976209] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ba9540/0x1bad9f0) succeed. 00:19:51.490 [2024-12-13 19:14:24.985376] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1baab80/0x1bef090) succeed. 
00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 Malloc0 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 Malloc1 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.490 [2024-12-13 19:14:25.195469] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:51.490 19:14:25 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.490 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:19:51.491 00:19:51.491 Discovery Log Number of Records 2, Generation counter 2 00:19:51.491 =====Discovery Log Entry 0====== 00:19:51.491 trtype: rdma 00:19:51.491 adrfam: ipv4 00:19:51.491 subtype: current discovery subsystem 00:19:51.491 treq: not required 00:19:51.491 portid: 0 00:19:51.491 trsvcid: 4420 00:19:51.491 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:51.491 traddr: 192.168.100.8 00:19:51.491 eflags: explicit discovery connections, duplicate discovery information 00:19:51.491 rdma_prtype: not specified 00:19:51.491 rdma_qptype: connected 00:19:51.491 rdma_cms: rdma-cm 00:19:51.491 rdma_pkey: 0x0000 00:19:51.491 =====Discovery Log Entry 1====== 00:19:51.491 trtype: rdma 00:19:51.491 adrfam: ipv4 00:19:51.491 subtype: nvme subsystem 00:19:51.491 treq: not required 00:19:51.491 portid: 0 00:19:51.491 trsvcid: 4420 00:19:51.491 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:51.491 traddr: 192.168.100.8 00:19:51.491 eflags: none 00:19:51.491 rdma_prtype: not specified 00:19:51.491 rdma_qptype: connected 00:19:51.491 rdma_cms: rdma-cm 00:19:51.491 rdma_pkey: 0x0000 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:51.491 19:14:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:52.061 19:14:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:52.061 19:14:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:52.061 19:14:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:52.061 19:14:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:52.061 19:14:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:52.061 19:14:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:53.972 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:53.972 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:53.972 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:53.972 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:53.972 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:53.972 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:53.972 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:53.972 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:54.232 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:54.233 /dev/nvme0n2 ]] 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:54.233 19:14:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:55.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:55.172 
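[Editor's sketch] For reference, the whole cycle traced above (discover, connect, count namespaces, disconnect) reduces to a handful of nvme-cli invocations. A minimal sketch using the addresses from this run; the hostnqn/hostid are the values generated earlier in the log:

# Query the discovery service on the SPDK target over RDMA
nvme discover -t rdma -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e

# Connect to the subsystem from Discovery Log Entry 1 (-i caps the I/O queue count,
# as set by NVME_CONNECT='nvme connect -i 15' in this run)
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e

# Both namespaces surface as /dev/nvme0n1 and /dev/nvme0n2; match them by serial
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2

# Tear the association back down
nvme disconnect -n nqn.2016-06.io.spdk:cnode1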
19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:55.172 rmmod nvme_rdma 00:19:55.172 rmmod nvme_fabrics 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 316796 ']' 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 316796 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 316796 ']' 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 316796 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316796 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316796' 00:19:55.172 killing process with pid 316796 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 316796 00:19:55.172 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 316796 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:55.742 00:19:55.742 real 0m12.544s 00:19:55.742 user 0m21.852s 00:19:55.742 sys 0m6.162s 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:55.742 ************************************ 00:19:55.742 END TEST nvmf_nvme_cli 00:19:55.742 ************************************ 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:55.742 ************************************ 00:19:55.742 START TEST nvmf_auth_target 00:19:55.742 ************************************ 00:19:55.742 19:14:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:55.742 * Looking for test storage... 00:19:55.742 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:55.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.742 --rc genhtml_branch_coverage=1 00:19:55.742 --rc genhtml_function_coverage=1 00:19:55.742 --rc genhtml_legend=1 00:19:55.742 --rc geninfo_all_blocks=1 00:19:55.742 --rc geninfo_unexecuted_blocks=1 00:19:55.742 00:19:55.742 ' 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:55.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.742 --rc genhtml_branch_coverage=1 00:19:55.742 --rc genhtml_function_coverage=1 00:19:55.742 --rc genhtml_legend=1 00:19:55.742 --rc geninfo_all_blocks=1 00:19:55.742 --rc geninfo_unexecuted_blocks=1 00:19:55.742 00:19:55.742 ' 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:55.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.742 --rc genhtml_branch_coverage=1 00:19:55.742 --rc genhtml_function_coverage=1 00:19:55.742 --rc genhtml_legend=1 00:19:55.742 --rc geninfo_all_blocks=1 00:19:55.742 --rc geninfo_unexecuted_blocks=1 00:19:55.742 00:19:55.742 ' 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:55.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.742 --rc genhtml_branch_coverage=1 00:19:55.742 --rc genhtml_function_coverage=1 00:19:55.742 --rc genhtml_legend=1 00:19:55.742 --rc geninfo_all_blocks=1 00:19:55.742 --rc geninfo_unexecuted_blocks=1 00:19:55.742 00:19:55.742 ' 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.742 19:14:30 
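[Editor's sketch] The scripts/common.sh trace above is a field-wise version comparison deciding whether the installed lcov (1.15) is older than 2, which selects the legacy LCOV_OPTS exported just before this point. Condensed into a standalone helper, assuming purely numeric dot/dash-separated fields (the traced code additionally sanitizes each field through its decimal helper):

lt() {  # lt A B -> success when version A sorts before version B
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov is pre-2.x"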
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.742 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:56.003 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:56.003 19:14:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:04.141 19:14:37 
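[Editor's sketch] The two arrays declared at the top of auth.sh suggest the shape of what follows: one DH-HMAC-CHAP configuration per (digest, dhgroup) pair. Sketched as the nested loop the arrays imply; the per-pair handshake body is elided, since the traced run has not reached it yet:

digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        # configure and exercise one digest/dhgroup combination (elided)
        echo "auth test: digest=${digest} dhgroup=${dhgroup}"
    done
done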
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.141 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:04.142 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.142 19:14:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:04.142 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:04.142 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:04.142 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.142 19:14:37 
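[Editor's sketch] The two "Found net devices under 0000:d9:00.x" lines come straight from globbing each device's sysfs node; the PCI-address-to-netdev mapping traced in common.sh is nothing more than:

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the directory prefix
echo "Found net devices under $pci: ${pci_net_devs[*]}"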
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:04.142 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.142 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:04.142 altname enp217s0f0np0 00:20:04.142 altname ens818f0np0 00:20:04.142 inet 192.168.100.8/24 scope global mlx_0_0 00:20:04.142 valid_lft forever preferred_lft forever 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:04.142 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:04.142 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.142 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:04.142 altname enp217s0f1np1 00:20:04.143 altname ens818f1np1 00:20:04.143 inet 192.168.100.9/24 scope global mlx_0_1 00:20:04.143 valid_lft forever preferred_lft forever 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
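[Editor's sketch] allocate_nic_ips walks the RDMA interface list and recovers each interface's IPv4 address from `ip` output, as the `awk`/`cut` pipeline above shows. Distilled from the trace:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9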
00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:04.143 19:14:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:04.143 192.168.100.9' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:04.143 192.168.100.9' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:04.143 192.168.100.9' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=321044 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 321044 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 321044 ']' 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
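[Editor's sketch] With both addresses gathered into RDMA_IP_LIST (one per line), the first and second target IPs are peeled off with head/tail exactly as traced above:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9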
00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=321176 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b15fd21bcf2ac30a5e1b4b32551f3cae4ea64c49e4b96928 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Ryy 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b15fd21bcf2ac30a5e1b4b32551f3cae4ea64c49e4b96928 0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b15fd21bcf2ac30a5e1b4b32551f3cae4ea64c49e4b96928 0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b15fd21bcf2ac30a5e1b4b32551f3cae4ea64c49e4b96928 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 
-- # python - 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Ryy 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Ryy 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Ryy 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2cd333ea4428553705f9c5547dde6d2d00646038831fd8641ded7f8043e92aa6 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ly6 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2cd333ea4428553705f9c5547dde6d2d00646038831fd8641ded7f8043e92aa6 3 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2cd333ea4428553705f9c5547dde6d2d00646038831fd8641ded7f8043e92aa6 3 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:04.143 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2cd333ea4428553705f9c5547dde6d2d00646038831fd8641ded7f8043e92aa6 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ly6 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ly6 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Ly6 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=sha256 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=92fb7cfe2ae0fc7c5dd4fcbe0a193b96 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.w61 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 92fb7cfe2ae0fc7c5dd4fcbe0a193b96 1 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 92fb7cfe2ae0fc7c5dd4fcbe0a193b96 1 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=92fb7cfe2ae0fc7c5dd4fcbe0a193b96 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.w61 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.w61 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.w61 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=390a5209ed56bec58bacac782e3324654e0284896857f827 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AHm 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 390a5209ed56bec58bacac782e3324654e0284896857f827 2 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 390a5209ed56bec58bacac782e3324654e0284896857f827 2 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix 
key digest 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=390a5209ed56bec58bacac782e3324654e0284896857f827 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AHm 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AHm 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.AHm 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0fcf9f50eb297ecc573918c5948920284dafacfb9fc6b960 00:20:04.144 19:14:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eBA 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0fcf9f50eb297ecc573918c5948920284dafacfb9fc6b960 2 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0fcf9f50eb297ecc573918c5948920284dafacfb9fc6b960 2 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0fcf9f50eb297ecc573918c5948920284dafacfb9fc6b960 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eBA 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eBA 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.eBA 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:04.144 19:14:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bd40568d8f3ecdac02aa31ac94efedc9 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.CrG 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bd40568d8f3ecdac02aa31ac94efedc9 1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bd40568d8f3ecdac02aa31ac94efedc9 1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bd40568d8f3ecdac02aa31ac94efedc9 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.CrG 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.CrG 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.CrG 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=86a15ff5c1d3ccbb0bf69f2577d940ecd41fb6d74615aec84d3c6f3cfd95c70f 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # 
file=/tmp/spdk.key-sha512.BV1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 86a15ff5c1d3ccbb0bf69f2577d940ecd41fb6d74615aec84d3c6f3cfd95c70f 3 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 86a15ff5c1d3ccbb0bf69f2577d940ecd41fb6d74615aec84d3c6f3cfd95c70f 3 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=86a15ff5c1d3ccbb0bf69f2577d940ecd41fb6d74615aec84d3c6f3cfd95c70f 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.BV1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.BV1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.BV1 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:04.144 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 321044 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 321044 ']' 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 321176 /var/tmp/host.sock 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 321176 ']' 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:04.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
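The gen_dhchap_key calls traced above condense to a small amount of shell. The sketch below is a re-creation inferred from the trace, not the test's own nvmf/common.sh: it draws len/2 random bytes as a len-character hex string, and that ASCII hex text itself is the secret (the DHHC-1:01:OTJmYjdj...: secret later in the trace base64-decodes back to the 92fb7cfe... key shown here), wrapped in the spec's DHHC-1 container with, assumed here, a little-endian CRC-32 appended before base64 encoding. The helper name is hypothetical.

gen_dhchap_key_sketch() {
    local digest=$1 len=$2                         # e.g. "sha256 32" or "sha384 48"
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex chars of key material
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    python3 -c 'import base64, binascii, sys
key = sys.argv[1].encode()                         # the ASCII hex text is the secret
crc = binascii.crc32(key).to_bytes(4, "little")    # assumed: CRC-32 appended to key
print("DHHC-1:" + sys.argv[2].zfill(2) + ":" + base64.b64encode(key + crc).decode() + ":")' \
        "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                             # secrets stay owner-readable only
    echo "$file"
}

Called as gen_dhchap_key_sketch sha256 32, this yields files like the /tmp/spdk.key-sha256.w61 above, whose DHHC-1:01:...: contents are the --dhchap-secret strings that reappear on the nvme connect lines later in the trace.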
00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.145 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ryy 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Ryy 00:20:04.405 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ryy 00:20:04.666 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Ly6 ]] 00:20:04.666 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ly6 00:20:04.666 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.666 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.666 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.666 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ly6 00:20:04.666 19:14:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ly6 00:20:04.666 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:04.666 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.w61 00:20:04.666 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.666 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.927 
19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.w61 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.w61 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.AHm ]] 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AHm 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AHm 00:20:04.927 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AHm 00:20:05.187 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:05.187 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.eBA 00:20:05.187 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.187 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.187 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.187 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.eBA 00:20:05.187 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.eBA 00:20:05.447 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.CrG ]] 00:20:05.447 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CrG 00:20:05.447 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.447 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.447 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.447 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CrG 00:20:05.447 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CrG 00:20:05.707 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:05.707 19:14:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.BV1 00:20:05.707 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.707 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.707 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.707 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.BV1 00:20:05.707 19:14:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.BV1 00:20:05.707 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:05.707 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:05.707 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.707 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.707 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.708 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:05.968 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.228 00:20:06.228 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.228 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.228 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.488 { 00:20:06.488 "cntlid": 1, 00:20:06.488 "qid": 0, 00:20:06.488 "state": "enabled", 00:20:06.488 "thread": "nvmf_tgt_poll_group_000", 00:20:06.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:06.488 "listen_address": { 00:20:06.488 "trtype": "RDMA", 00:20:06.488 "adrfam": "IPv4", 00:20:06.488 "traddr": "192.168.100.8", 00:20:06.488 "trsvcid": "4420" 00:20:06.488 }, 00:20:06.488 "peer_address": { 00:20:06.488 "trtype": "RDMA", 00:20:06.488 "adrfam": "IPv4", 00:20:06.488 "traddr": "192.168.100.8", 00:20:06.488 "trsvcid": "46634" 00:20:06.488 }, 00:20:06.488 "auth": { 00:20:06.488 "state": "completed", 00:20:06.488 "digest": "sha256", 00:20:06.488 "dhgroup": "null" 00:20:06.488 } 00:20:06.488 } 00:20:06.488 ]' 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.488 19:14:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.748 19:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:06.748 19:14:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:10.043 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.303 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.304 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
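Stripped of xtrace noise, each connect_authenticate round above reduces to four RPC calls. The following is a hedged reconstruction rather than the script itself; the socket paths, NQNs, RDMA address, and flags are taken verbatim from the trace, and it assumes key0/ckey0 were already registered on both sockets with keyring_file_add_key as shown earlier.

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# 1) Target side (default /var/tmp/spdk.sock): allow the host on the subsystem and
#    bind its DH-HMAC-CHAP key, plus a controller key for bidirectional auth.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 2) Host side: pin the initiator to one digest/DH-group pair so the round
#    exercises exactly that combination (sha256 with the null group here).
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null

# 3) Host side: attach over RDMA; DH-HMAC-CHAP runs during the CONNECT exchange.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4) Verify on the target: the qpair JSON should show auth.state "completed" with
#    the negotiated digest and dhgroup, as in the dumps above.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN"

Only the key names and the --dhchap-digests/--dhchap-dhgroups values change between rounds; the detach_controller/remove_host teardown separating them is visible in the trace.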
00:20:10.304 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.304 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.563 00:20:10.563 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.563 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.563 19:14:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.832 { 00:20:10.832 "cntlid": 3, 00:20:10.832 "qid": 0, 00:20:10.832 "state": "enabled", 00:20:10.832 "thread": "nvmf_tgt_poll_group_000", 00:20:10.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:10.832 "listen_address": { 00:20:10.832 "trtype": "RDMA", 00:20:10.832 "adrfam": "IPv4", 00:20:10.832 "traddr": "192.168.100.8", 00:20:10.832 "trsvcid": "4420" 00:20:10.832 }, 00:20:10.832 "peer_address": { 00:20:10.832 "trtype": "RDMA", 00:20:10.832 "adrfam": "IPv4", 00:20:10.832 "traddr": "192.168.100.8", 00:20:10.832 "trsvcid": "52042" 00:20:10.832 }, 00:20:10.832 "auth": { 00:20:10.832 "state": "completed", 00:20:10.832 "digest": "sha256", 00:20:10.832 "dhgroup": "null" 00:20:10.832 } 00:20:10.832 } 00:20:10.832 ]' 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:10.832 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.093 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.093 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.093 19:14:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.093 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:11.093 19:14:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:12.032 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.032 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:12.032 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.032 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.032 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.032 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.032 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.292 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.552 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.552 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.552 { 00:20:12.552 "cntlid": 5, 00:20:12.552 "qid": 0, 00:20:12.552 "state": "enabled", 00:20:12.552 "thread": "nvmf_tgt_poll_group_000", 00:20:12.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:12.552 "listen_address": { 00:20:12.552 "trtype": "RDMA", 00:20:12.552 "adrfam": "IPv4", 00:20:12.552 "traddr": "192.168.100.8", 00:20:12.552 "trsvcid": "4420" 00:20:12.552 }, 00:20:12.552 "peer_address": { 00:20:12.552 "trtype": "RDMA", 00:20:12.552 "adrfam": "IPv4", 00:20:12.552 "traddr": "192.168.100.8", 00:20:12.552 "trsvcid": "48801" 00:20:12.552 }, 00:20:12.552 "auth": { 00:20:12.552 "state": "completed", 00:20:12.552 "digest": "sha256", 00:20:12.553 "dhgroup": "null" 00:20:12.553 } 00:20:12.553 } 00:20:12.553 ]' 00:20:12.553 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.812 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.812 19:14:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.812 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:12.812 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.812 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.812 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.812 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.072 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:13.073 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:13.642 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.642 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:13.642 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.642 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.642 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.643 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.643 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:13.643 19:14:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.903 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.247 00:20:14.247 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.247 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.247 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.531 { 00:20:14.531 "cntlid": 7, 00:20:14.531 "qid": 0, 00:20:14.531 "state": "enabled", 00:20:14.531 "thread": "nvmf_tgt_poll_group_000", 00:20:14.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:14.531 "listen_address": { 00:20:14.531 "trtype": "RDMA", 00:20:14.531 "adrfam": "IPv4", 00:20:14.531 "traddr": "192.168.100.8", 00:20:14.531 "trsvcid": "4420" 00:20:14.531 }, 00:20:14.531 "peer_address": { 00:20:14.531 "trtype": "RDMA", 00:20:14.531 "adrfam": "IPv4", 00:20:14.531 "traddr": "192.168.100.8", 00:20:14.531 "trsvcid": "34636" 00:20:14.531 }, 00:20:14.531 "auth": { 00:20:14.531 "state": "completed", 00:20:14.531 "digest": "sha256", 00:20:14.531 "dhgroup": "null" 00:20:14.531 } 00:20:14.531 } 00:20:14.531 ]' 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.531 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.804 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:14.804 19:14:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:15.411 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.689 19:14:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.968 00:20:15.968 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.968 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.968 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.248 { 00:20:16.248 "cntlid": 9, 00:20:16.248 "qid": 0, 00:20:16.248 "state": "enabled", 00:20:16.248 "thread": "nvmf_tgt_poll_group_000", 00:20:16.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:16.248 "listen_address": { 00:20:16.248 "trtype": "RDMA", 00:20:16.248 "adrfam": "IPv4", 00:20:16.248 "traddr": "192.168.100.8", 00:20:16.248 "trsvcid": "4420" 00:20:16.248 }, 00:20:16.248 "peer_address": { 00:20:16.248 "trtype": "RDMA", 00:20:16.248 "adrfam": "IPv4", 00:20:16.248 "traddr": "192.168.100.8", 00:20:16.248 "trsvcid": "41693" 00:20:16.248 }, 00:20:16.248 "auth": { 00:20:16.248 "state": "completed", 00:20:16.248 "digest": "sha256", 00:20:16.248 "dhgroup": "ffdhe2048" 00:20:16.248 } 00:20:16.248 } 00:20:16.248 ]' 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.248 
19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.248 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.532 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:16.532 19:14:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:17.128 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.128 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:17.128 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.128 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.128 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.128 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.128 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.128 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.394 19:14:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.394 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.664 00:20:17.664 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.664 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.664 19:14:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.938 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.938 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.938 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.938 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.938 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.938 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.938 { 00:20:17.938 "cntlid": 11, 00:20:17.938 "qid": 0, 00:20:17.938 "state": "enabled", 00:20:17.938 "thread": "nvmf_tgt_poll_group_000", 00:20:17.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:17.938 "listen_address": { 00:20:17.938 "trtype": "RDMA", 00:20:17.938 "adrfam": "IPv4", 00:20:17.938 "traddr": "192.168.100.8", 00:20:17.938 "trsvcid": "4420" 00:20:17.938 }, 00:20:17.938 "peer_address": { 00:20:17.938 "trtype": "RDMA", 00:20:17.938 "adrfam": "IPv4", 00:20:17.938 "traddr": "192.168.100.8", 00:20:17.938 "trsvcid": "59119" 00:20:17.938 }, 00:20:17.938 "auth": { 00:20:17.938 "state": 
"completed", 00:20:17.938 "digest": "sha256", 00:20:17.938 "dhgroup": "ffdhe2048" 00:20:17.938 } 00:20:17.938 } 00:20:17.938 ]' 00:20:17.938 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.938 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.939 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.939 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.939 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.939 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.939 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.939 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.221 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:18.221 19:14:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:18.807 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.067 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.327 00:20:19.327 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.327 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.327 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.587 { 00:20:19.587 "cntlid": 13, 00:20:19.587 "qid": 0, 00:20:19.587 "state": "enabled", 00:20:19.587 "thread": "nvmf_tgt_poll_group_000", 00:20:19.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:19.587 "listen_address": { 00:20:19.587 "trtype": "RDMA", 00:20:19.587 "adrfam": "IPv4", 00:20:19.587 "traddr": "192.168.100.8", 00:20:19.587 "trsvcid": "4420" 
00:20:19.587 }, 00:20:19.587 "peer_address": { 00:20:19.587 "trtype": "RDMA", 00:20:19.587 "adrfam": "IPv4", 00:20:19.587 "traddr": "192.168.100.8", 00:20:19.587 "trsvcid": "37525" 00:20:19.587 }, 00:20:19.587 "auth": { 00:20:19.587 "state": "completed", 00:20:19.587 "digest": "sha256", 00:20:19.587 "dhgroup": "ffdhe2048" 00:20:19.587 } 00:20:19.587 } 00:20:19.587 ]' 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.587 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.847 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.847 19:14:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.847 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.847 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.847 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.107 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:20.107 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:20.676 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.676 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:20.676 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.676 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.676 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.676 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.676 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.676 19:14:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.938 
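Condensed, each pass of the loop traced above is the following host-side RPC sequence. This is a minimal sketch assembled only from commands visible in the trace: the rpc.py path, sockets, NQNs, and key names come from it, while key1/ckey1 are assumed to be keyring entries registered earlier in the test and the target RPC socket is left at its default (also an assumption).

  target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  host_rpc="$target_rpc -s /var/tmp/host.sock"   # the bdev_nvme side runs as a separate SPDK app

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

  # Pin the host to one digest/DH-group pair so the negotiated values are predictable.
  $host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Allow this hostnqn on the subsystem, binding its DH-CHAP key (plus a controller
  # key when bidirectional authentication is being exercised).
  $target_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach an authenticated controller over RDMA; DH-CHAP runs during this connect.
  $host_rpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Teardown before the next iteration.
  $host_rpc bdev_nvme_detach_controller nvme0
  $target_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The outer loops swap in ffdhe3072 and ffdhe4096 for the DH group and key0 through key3 for the key slot, which is why the same sequence repeats below with only those two parameters changing.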
19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.938 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.197 00:20:21.197 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.197 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.197 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.457 { 00:20:21.457 "cntlid": 15, 00:20:21.457 "qid": 0, 00:20:21.457 "state": "enabled", 00:20:21.457 "thread": "nvmf_tgt_poll_group_000", 00:20:21.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:21.457 
"listen_address": { 00:20:21.457 "trtype": "RDMA", 00:20:21.457 "adrfam": "IPv4", 00:20:21.457 "traddr": "192.168.100.8", 00:20:21.457 "trsvcid": "4420" 00:20:21.457 }, 00:20:21.457 "peer_address": { 00:20:21.457 "trtype": "RDMA", 00:20:21.457 "adrfam": "IPv4", 00:20:21.457 "traddr": "192.168.100.8", 00:20:21.457 "trsvcid": "57380" 00:20:21.457 }, 00:20:21.457 "auth": { 00:20:21.457 "state": "completed", 00:20:21.457 "digest": "sha256", 00:20:21.457 "dhgroup": "ffdhe2048" 00:20:21.457 } 00:20:21.457 } 00:20:21.457 ]' 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.457 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.716 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:21.716 19:14:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:22.286 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.546 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.547 19:14:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.807 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:20:23.066 { 00:20:23.066 "cntlid": 17, 00:20:23.066 "qid": 0, 00:20:23.066 "state": "enabled", 00:20:23.066 "thread": "nvmf_tgt_poll_group_000", 00:20:23.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:23.066 "listen_address": { 00:20:23.066 "trtype": "RDMA", 00:20:23.066 "adrfam": "IPv4", 00:20:23.066 "traddr": "192.168.100.8", 00:20:23.066 "trsvcid": "4420" 00:20:23.066 }, 00:20:23.066 "peer_address": { 00:20:23.066 "trtype": "RDMA", 00:20:23.066 "adrfam": "IPv4", 00:20:23.066 "traddr": "192.168.100.8", 00:20:23.066 "trsvcid": "57285" 00:20:23.066 }, 00:20:23.066 "auth": { 00:20:23.066 "state": "completed", 00:20:23.066 "digest": "sha256", 00:20:23.066 "dhgroup": "ffdhe3072" 00:20:23.066 } 00:20:23.066 } 00:20:23.066 ]' 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.066 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.326 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.326 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.326 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.326 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.326 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.585 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:23.585 19:14:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:24.156 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.156 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:24.156 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.156 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.156 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.156 19:14:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.156 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.156 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.416 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.676 00:20:24.676 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.676 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.676 19:14:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.936 { 00:20:24.936 "cntlid": 19, 00:20:24.936 "qid": 0, 00:20:24.936 "state": "enabled", 00:20:24.936 "thread": "nvmf_tgt_poll_group_000", 00:20:24.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:24.936 "listen_address": { 00:20:24.936 "trtype": "RDMA", 00:20:24.936 "adrfam": "IPv4", 00:20:24.936 "traddr": "192.168.100.8", 00:20:24.936 "trsvcid": "4420" 00:20:24.936 }, 00:20:24.936 "peer_address": { 00:20:24.936 "trtype": "RDMA", 00:20:24.936 "adrfam": "IPv4", 00:20:24.936 "traddr": "192.168.100.8", 00:20:24.936 "trsvcid": "52485" 00:20:24.936 }, 00:20:24.936 "auth": { 00:20:24.936 "state": "completed", 00:20:24.936 "digest": "sha256", 00:20:24.936 "dhgroup": "ffdhe3072" 00:20:24.936 } 00:20:24.936 } 00:20:24.936 ]' 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.936 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.196 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:25.196 19:14:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:25.767 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.027 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:26.027 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:26.027 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.027 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.027 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.027 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:26.027 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.287 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.547 00:20:26.547 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.547 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.547 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.547 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- 
# [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.547 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.547 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.547 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.807 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.807 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.807 { 00:20:26.807 "cntlid": 21, 00:20:26.807 "qid": 0, 00:20:26.807 "state": "enabled", 00:20:26.807 "thread": "nvmf_tgt_poll_group_000", 00:20:26.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:26.807 "listen_address": { 00:20:26.807 "trtype": "RDMA", 00:20:26.807 "adrfam": "IPv4", 00:20:26.807 "traddr": "192.168.100.8", 00:20:26.807 "trsvcid": "4420" 00:20:26.807 }, 00:20:26.807 "peer_address": { 00:20:26.807 "trtype": "RDMA", 00:20:26.807 "adrfam": "IPv4", 00:20:26.807 "traddr": "192.168.100.8", 00:20:26.807 "trsvcid": "45677" 00:20:26.807 }, 00:20:26.807 "auth": { 00:20:26.807 "state": "completed", 00:20:26.807 "digest": "sha256", 00:20:26.807 "dhgroup": "ffdhe3072" 00:20:26.807 } 00:20:26.807 } 00:20:26.807 ]' 00:20:26.807 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.807 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.807 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.807 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.807 19:15:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.807 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.807 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.807 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.067 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:27.067 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:27.637 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.637 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:27.637 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.637 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.637 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.637 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.637 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.637 19:15:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.897 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.156 00:20:28.156 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.156 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.157 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.416 { 00:20:28.416 "cntlid": 23, 00:20:28.416 "qid": 0, 00:20:28.416 "state": "enabled", 00:20:28.416 "thread": "nvmf_tgt_poll_group_000", 00:20:28.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:28.416 "listen_address": { 00:20:28.416 "trtype": "RDMA", 00:20:28.416 "adrfam": "IPv4", 00:20:28.416 "traddr": "192.168.100.8", 00:20:28.416 "trsvcid": "4420" 00:20:28.416 }, 00:20:28.416 "peer_address": { 00:20:28.416 "trtype": "RDMA", 00:20:28.416 "adrfam": "IPv4", 00:20:28.416 "traddr": "192.168.100.8", 00:20:28.416 "trsvcid": "47398" 00:20:28.416 }, 00:20:28.416 "auth": { 00:20:28.416 "state": "completed", 00:20:28.416 "digest": "sha256", 00:20:28.416 "dhgroup": "ffdhe3072" 00:20:28.416 } 00:20:28.416 } 00:20:28.416 ]' 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.416 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.676 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.676 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.676 19:15:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.676 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:28.676 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
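The same key material is also exercised through the kernel initiator: as the nvme connect / nvme disconnect pairs in the trace show, nvme-cli takes the DH-CHAP secrets directly on the command line. A condensed sketch of one such pair, with placeholder secrets standing in for the DHHC-1 blobs logged above:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

  # -i 1 limits the connection to a single I/O queue and -l 0 disables
  # controller-loss retries, so an authentication failure surfaces immediately.
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
      --dhchap-secret 'DHHC-1:01:<host key, base64 placeholder>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key, base64 placeholder>:'

  nvme disconnect -n "$subnqn"   # expected output: "... disconnected 1 controller(s)"

When no controller key is configured for a slot (key3 in this run), the trace simply omits --dhchap-ctrl-secret and authentication is unidirectional.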
00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.617 19:15:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.877 00:20:29.877 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.877 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.877 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.136 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.137 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.137 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.137 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.137 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.137 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.137 { 00:20:30.137 "cntlid": 25, 00:20:30.137 "qid": 0, 00:20:30.137 "state": "enabled", 00:20:30.137 "thread": "nvmf_tgt_poll_group_000", 00:20:30.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:30.137 "listen_address": { 00:20:30.137 "trtype": "RDMA", 00:20:30.137 "adrfam": "IPv4", 00:20:30.137 "traddr": "192.168.100.8", 00:20:30.137 "trsvcid": "4420" 00:20:30.137 }, 00:20:30.137 "peer_address": { 00:20:30.137 "trtype": "RDMA", 00:20:30.137 "adrfam": "IPv4", 00:20:30.137 "traddr": "192.168.100.8", 00:20:30.137 "trsvcid": "40502" 00:20:30.137 }, 00:20:30.137 "auth": { 00:20:30.137 "state": "completed", 00:20:30.137 "digest": "sha256", 00:20:30.137 "dhgroup": "ffdhe4096" 00:20:30.137 } 00:20:30.137 } 00:20:30.137 ]' 00:20:30.137 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.137 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.137 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.397 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.397 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.397 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.397 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.397 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.397 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:30.397 19:15:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 
8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.335 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.595 00:20:31.855 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.855 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.855 19:15:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.855 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.855 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.855 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.855 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.855 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.855 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.855 { 00:20:31.855 "cntlid": 27, 00:20:31.855 "qid": 0, 00:20:31.855 "state": "enabled", 00:20:31.855 "thread": "nvmf_tgt_poll_group_000", 00:20:31.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:31.855 "listen_address": { 00:20:31.855 "trtype": "RDMA", 00:20:31.855 "adrfam": "IPv4", 00:20:31.855 "traddr": "192.168.100.8", 00:20:31.855 "trsvcid": "4420" 00:20:31.855 }, 00:20:31.855 "peer_address": { 00:20:31.855 "trtype": "RDMA", 00:20:31.855 "adrfam": "IPv4", 00:20:31.855 "traddr": "192.168.100.8", 00:20:31.855 "trsvcid": "53685" 00:20:31.855 }, 00:20:31.855 "auth": { 00:20:31.855 "state": "completed", 00:20:31.855 "digest": "sha256", 00:20:31.855 "dhgroup": "ffdhe4096" 00:20:31.855 } 00:20:31.855 } 00:20:31.855 ]' 00:20:31.855 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.114 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.114 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.114 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.114 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.114 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.114 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.114 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.374 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret 
DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:32.375 19:15:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:32.944 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.944 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:32.944 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.944 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.944 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.944 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.944 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.944 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.204 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.464 00:20:33.464 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.464 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.464 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.724 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.724 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.724 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.724 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.724 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.724 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.724 { 00:20:33.724 "cntlid": 29, 00:20:33.724 "qid": 0, 00:20:33.725 "state": "enabled", 00:20:33.725 "thread": "nvmf_tgt_poll_group_000", 00:20:33.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:33.725 "listen_address": { 00:20:33.725 "trtype": "RDMA", 00:20:33.725 "adrfam": "IPv4", 00:20:33.725 "traddr": "192.168.100.8", 00:20:33.725 "trsvcid": "4420" 00:20:33.725 }, 00:20:33.725 "peer_address": { 00:20:33.725 "trtype": "RDMA", 00:20:33.725 "adrfam": "IPv4", 00:20:33.725 "traddr": "192.168.100.8", 00:20:33.725 "trsvcid": "47915" 00:20:33.725 }, 00:20:33.725 "auth": { 00:20:33.725 "state": "completed", 00:20:33.725 "digest": "sha256", 00:20:33.725 "dhgroup": "ffdhe4096" 00:20:33.725 } 00:20:33.725 } 00:20:33.725 ]' 00:20:33.725 19:15:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.725 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.725 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.725 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.725 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.725 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.725 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.725 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.984 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:33.984 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:34.554 19:15:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.814 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:34.814 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.814 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.814 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.814 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.814 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.814 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.074 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.334 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.334 { 00:20:35.334 "cntlid": 31, 00:20:35.334 "qid": 0, 00:20:35.334 "state": "enabled", 00:20:35.334 "thread": "nvmf_tgt_poll_group_000", 00:20:35.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:35.334 "listen_address": { 00:20:35.334 "trtype": "RDMA", 00:20:35.334 "adrfam": "IPv4", 00:20:35.334 "traddr": "192.168.100.8", 00:20:35.334 "trsvcid": "4420" 00:20:35.334 }, 00:20:35.334 "peer_address": { 00:20:35.334 "trtype": "RDMA", 00:20:35.334 "adrfam": "IPv4", 00:20:35.334 "traddr": "192.168.100.8", 00:20:35.334 "trsvcid": "45128" 00:20:35.334 }, 00:20:35.334 "auth": { 00:20:35.334 "state": "completed", 00:20:35.334 "digest": "sha256", 00:20:35.334 "dhgroup": "ffdhe4096" 00:20:35.334 } 00:20:35.334 } 00:20:35.334 ]' 00:20:35.334 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.594 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.594 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.594 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.594 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.594 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.594 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
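
Each connect_authenticate pass recorded above has the same shape: pin the host to one digest/DH-group pair, authorize the host NQN on the subsystem with the key under test, attach a controller so the DH-HMAC-CHAP handshake actually runs, verify, and tear down. A condensed sketch of one pass, using only RPCs that appear in this trace; the target_rpc/host_rpc wrappers are hypothetical stand-ins for the harness's rpc_cmd and hostrpc helpers, and the NQNs, address, and socket path are the ones this run logs:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  target_rpc() { "$RPC" "$@"; }                        # target app, default RPC socket
  host_rpc()   { "$RPC" -s /var/tmp/host.sock "$@"; }  # host-side bdev app
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  subnqn=nqn.2024-03.io.spdk:cnode0

  # pin the host to one digest/DH-group combination for this pass
  host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # authorize the host with key3 (no --dhchap-ctrlr-key: key3 carries no ctrlr key in this run)
  target_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
  # attaching forces the DH-HMAC-CHAP exchange on the new admin qpair
  host_rpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
  # ... verify the negotiated auth state (see the next sketch), then tear down
  host_rpc bdev_nvme_detach_controller nvme0
  target_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
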
00:20:35.594 19:15:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.853 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:35.853 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.424 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.684 19:15:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.684 19:15:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.943 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.202 { 00:20:37.202 "cntlid": 33, 00:20:37.202 "qid": 0, 00:20:37.202 "state": "enabled", 00:20:37.202 "thread": "nvmf_tgt_poll_group_000", 00:20:37.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:37.202 "listen_address": { 00:20:37.202 "trtype": "RDMA", 00:20:37.202 "adrfam": "IPv4", 00:20:37.202 "traddr": "192.168.100.8", 00:20:37.202 "trsvcid": "4420" 00:20:37.202 }, 00:20:37.202 "peer_address": { 00:20:37.202 "trtype": "RDMA", 00:20:37.202 "adrfam": "IPv4", 00:20:37.202 "traddr": "192.168.100.8", 00:20:37.202 "trsvcid": "51926" 00:20:37.202 }, 00:20:37.202 "auth": { 00:20:37.202 "state": "completed", 00:20:37.202 "digest": "sha256", 00:20:37.202 "dhgroup": "ffdhe6144" 00:20:37.202 } 00:20:37.202 } 00:20:37.202 ]' 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.202 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.462 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.462 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.462 
19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.462 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.462 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.721 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:37.721 19:15:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:38.291 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.291 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:38.291 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.291 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.291 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.291 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.291 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.291 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.551 19:15:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.811 00:20:38.811 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.811 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.811 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.070 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.070 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.070 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.071 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.071 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.071 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.071 { 00:20:39.071 "cntlid": 35, 00:20:39.071 "qid": 0, 00:20:39.071 "state": "enabled", 00:20:39.071 "thread": "nvmf_tgt_poll_group_000", 00:20:39.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:39.071 "listen_address": { 00:20:39.071 "trtype": "RDMA", 00:20:39.071 "adrfam": "IPv4", 00:20:39.071 "traddr": "192.168.100.8", 00:20:39.071 "trsvcid": "4420" 00:20:39.071 }, 00:20:39.071 "peer_address": { 00:20:39.071 "trtype": "RDMA", 00:20:39.071 "adrfam": "IPv4", 00:20:39.071 "traddr": "192.168.100.8", 00:20:39.071 "trsvcid": "40681" 00:20:39.071 }, 00:20:39.071 "auth": { 00:20:39.071 "state": "completed", 00:20:39.071 "digest": "sha256", 00:20:39.071 "dhgroup": "ffdhe6144" 00:20:39.071 } 00:20:39.071 } 00:20:39.071 ]' 00:20:39.071 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.071 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.071 
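
The read-back checks running here do not trust a successful attach alone; they confirm what was actually negotiated, from both ends. On the host side the controller must exist under its expected name; on the target side the qpair's auth block must report the digest, DH group, and a completed state for the pass. A sketch of those checks with the jq filters used in this trace (wrappers as in the previous sketch):

  # host side: the attached controller shows up under the expected name
  [[ $(host_rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # target side: the admin qpair reports the negotiated auth parameters
  qpairs=$(target_rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
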
19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.331 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.331 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.331 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.331 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.331 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.331 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:39.331 19:15:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:40.271 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.271 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:40.271 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.271 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.271 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.271 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.271 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.271 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.531 19:15:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.531 19:15:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.802 00:20:40.802 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.802 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.802 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.063 { 00:20:41.063 "cntlid": 37, 00:20:41.063 "qid": 0, 00:20:41.063 "state": "enabled", 00:20:41.063 "thread": "nvmf_tgt_poll_group_000", 00:20:41.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:41.063 "listen_address": { 00:20:41.063 "trtype": "RDMA", 00:20:41.063 "adrfam": "IPv4", 00:20:41.063 "traddr": "192.168.100.8", 00:20:41.063 "trsvcid": "4420" 00:20:41.063 }, 00:20:41.063 "peer_address": { 00:20:41.063 "trtype": "RDMA", 00:20:41.063 "adrfam": "IPv4", 00:20:41.063 "traddr": "192.168.100.8", 00:20:41.063 "trsvcid": "52754" 00:20:41.063 }, 00:20:41.063 "auth": { 00:20:41.063 "state": "completed", 00:20:41.063 "digest": "sha256", 00:20:41.063 "dhgroup": "ffdhe6144" 00:20:41.063 } 00:20:41.063 } 
00:20:41.063 ]' 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.063 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.322 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:41.322 19:15:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:41.891 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.152 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.722 00:20:42.722 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.722 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.722 19:15:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.722 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.722 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.722 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.722 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.722 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.722 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.722 { 00:20:42.722 "cntlid": 39, 00:20:42.722 "qid": 0, 00:20:42.722 "state": "enabled", 00:20:42.722 "thread": "nvmf_tgt_poll_group_000", 00:20:42.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:42.722 "listen_address": { 00:20:42.722 "trtype": "RDMA", 00:20:42.722 "adrfam": "IPv4", 00:20:42.722 "traddr": "192.168.100.8", 00:20:42.722 "trsvcid": "4420" 00:20:42.722 }, 00:20:42.722 "peer_address": { 00:20:42.722 "trtype": "RDMA", 00:20:42.722 "adrfam": "IPv4", 00:20:42.722 "traddr": "192.168.100.8", 00:20:42.722 "trsvcid": "45881" 00:20:42.722 }, 
00:20:42.722 "auth": { 00:20:42.722 "state": "completed", 00:20:42.722 "digest": "sha256", 00:20:42.722 "dhgroup": "ffdhe6144" 00:20:42.722 } 00:20:42.722 } 00:20:42.722 ]' 00:20:42.722 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.982 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.982 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.982 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.982 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.982 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.982 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.982 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.241 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:43.242 19:15:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:43.811 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.811 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:43.811 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.811 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.812 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.812 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.812 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.812 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:43.812 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.072 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.642 00:20:44.642 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.642 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.642 19:15:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.903 { 00:20:44.903 "cntlid": 41, 00:20:44.903 "qid": 0, 00:20:44.903 "state": "enabled", 00:20:44.903 "thread": "nvmf_tgt_poll_group_000", 00:20:44.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:44.903 "listen_address": { 00:20:44.903 "trtype": "RDMA", 00:20:44.903 "adrfam": "IPv4", 00:20:44.903 "traddr": 
"192.168.100.8", 00:20:44.903 "trsvcid": "4420" 00:20:44.903 }, 00:20:44.903 "peer_address": { 00:20:44.903 "trtype": "RDMA", 00:20:44.903 "adrfam": "IPv4", 00:20:44.903 "traddr": "192.168.100.8", 00:20:44.903 "trsvcid": "54012" 00:20:44.903 }, 00:20:44.903 "auth": { 00:20:44.903 "state": "completed", 00:20:44.903 "digest": "sha256", 00:20:44.903 "dhgroup": "ffdhe8192" 00:20:44.903 } 00:20:44.903 } 00:20:44.903 ]' 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.903 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.162 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:45.162 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:45.731 19:15:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.992 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.561 00:20:46.561 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.561 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.561 19:15:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:20:46.820 { 00:20:46.820 "cntlid": 43, 00:20:46.820 "qid": 0, 00:20:46.820 "state": "enabled", 00:20:46.820 "thread": "nvmf_tgt_poll_group_000", 00:20:46.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:46.820 "listen_address": { 00:20:46.820 "trtype": "RDMA", 00:20:46.820 "adrfam": "IPv4", 00:20:46.820 "traddr": "192.168.100.8", 00:20:46.820 "trsvcid": "4420" 00:20:46.820 }, 00:20:46.820 "peer_address": { 00:20:46.820 "trtype": "RDMA", 00:20:46.820 "adrfam": "IPv4", 00:20:46.820 "traddr": "192.168.100.8", 00:20:46.820 "trsvcid": "59956" 00:20:46.820 }, 00:20:46.820 "auth": { 00:20:46.820 "state": "completed", 00:20:46.820 "digest": "sha256", 00:20:46.820 "dhgroup": "ffdhe8192" 00:20:46.820 } 00:20:46.820 } 00:20:46.820 ]' 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.820 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.080 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:47.080 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:47.650 19:15:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.910 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:47.910 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.910 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.910 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.910 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.910 19:15:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.910 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.170 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.430 00:20:48.430 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.430 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.430 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.690 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.690 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.690 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.690 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.690 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.690 19:15:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.690 { 00:20:48.690 "cntlid": 45, 00:20:48.690 "qid": 0, 00:20:48.690 "state": "enabled", 00:20:48.690 "thread": "nvmf_tgt_poll_group_000", 00:20:48.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:48.690 "listen_address": { 00:20:48.690 "trtype": "RDMA", 00:20:48.690 "adrfam": "IPv4", 00:20:48.690 "traddr": "192.168.100.8", 00:20:48.690 "trsvcid": "4420" 00:20:48.690 }, 00:20:48.690 "peer_address": { 00:20:48.690 "trtype": "RDMA", 00:20:48.690 "adrfam": "IPv4", 00:20:48.690 "traddr": "192.168.100.8", 00:20:48.690 "trsvcid": "42811" 00:20:48.690 }, 00:20:48.690 "auth": { 00:20:48.690 "state": "completed", 00:20:48.690 "digest": "sha256", 00:20:48.690 "dhgroup": "ffdhe8192" 00:20:48.690 } 00:20:48.690 } 00:20:48.690 ]' 00:20:48.690 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.690 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.690 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.950 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.950 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.950 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.950 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.950 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.210 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:49.210 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:49.780 19:15:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.780 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:49.780 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.780 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
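The qpair verification traced above reduces to three jq probes against the target's nvmf_subsystem_get_qpairs output. A minimal standalone sketch of that check, assuming the target RPC server listens on its default socket and using the subsystem NQN and the sha256/ffdhe8192 expectations from this iteration (the qpairs variable name is illustrative):

  # query the subsystem's active qpairs and verify the negotiated auth params
  qpairs=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]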
00:20:49.780 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.780 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.780 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.780 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.040 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.610 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.610 19:15:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.610 { 00:20:50.610 "cntlid": 47, 00:20:50.610 "qid": 0, 00:20:50.610 "state": "enabled", 00:20:50.610 "thread": "nvmf_tgt_poll_group_000", 00:20:50.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:50.610 "listen_address": { 00:20:50.610 "trtype": "RDMA", 00:20:50.610 "adrfam": "IPv4", 00:20:50.610 "traddr": "192.168.100.8", 00:20:50.610 "trsvcid": "4420" 00:20:50.610 }, 00:20:50.610 "peer_address": { 00:20:50.610 "trtype": "RDMA", 00:20:50.610 "adrfam": "IPv4", 00:20:50.610 "traddr": "192.168.100.8", 00:20:50.610 "trsvcid": "39689" 00:20:50.610 }, 00:20:50.610 "auth": { 00:20:50.610 "state": "completed", 00:20:50.610 "digest": "sha256", 00:20:50.610 "dhgroup": "ffdhe8192" 00:20:50.610 } 00:20:50.610 } 00:20:50.610 ]' 00:20:50.610 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.878 19:15:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.878 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.878 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.878 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.878 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.878 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.878 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.138 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:51.138 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:51.707 19:15:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:51.707 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.967 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.226 00:20:52.226 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.226 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.226 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.486 { 00:20:52.486 "cntlid": 49, 00:20:52.486 "qid": 0, 00:20:52.486 "state": "enabled", 00:20:52.486 "thread": "nvmf_tgt_poll_group_000", 00:20:52.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:52.486 "listen_address": { 00:20:52.486 "trtype": "RDMA", 00:20:52.486 "adrfam": "IPv4", 00:20:52.486 "traddr": "192.168.100.8", 00:20:52.486 "trsvcid": "4420" 00:20:52.486 }, 00:20:52.486 "peer_address": { 00:20:52.486 "trtype": "RDMA", 00:20:52.486 "adrfam": "IPv4", 00:20:52.486 "traddr": "192.168.100.8", 00:20:52.486 "trsvcid": "53640" 00:20:52.486 }, 00:20:52.486 "auth": { 00:20:52.486 "state": "completed", 00:20:52.486 "digest": "sha384", 00:20:52.486 "dhgroup": "null" 00:20:52.486 } 00:20:52.486 } 00:20:52.486 ]' 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:52.486 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.746 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.746 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.746 19:15:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.746 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:52.746 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:53.315 19:15:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.575 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:53.575 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.575 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.575 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.575 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.575 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.575 19:15:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.835 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.095 00:20:54.095 19:15:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.095 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.095 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.355 { 00:20:54.355 "cntlid": 51, 00:20:54.355 "qid": 0, 00:20:54.355 "state": "enabled", 00:20:54.355 "thread": "nvmf_tgt_poll_group_000", 00:20:54.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:54.355 "listen_address": { 00:20:54.355 "trtype": "RDMA", 00:20:54.355 "adrfam": "IPv4", 00:20:54.355 "traddr": "192.168.100.8", 00:20:54.355 "trsvcid": "4420" 00:20:54.355 }, 00:20:54.355 "peer_address": { 00:20:54.355 "trtype": "RDMA", 00:20:54.355 "adrfam": "IPv4", 00:20:54.355 "traddr": "192.168.100.8", 00:20:54.355 "trsvcid": "59086" 00:20:54.355 }, 00:20:54.355 "auth": { 00:20:54.355 "state": "completed", 00:20:54.355 "digest": "sha384", 00:20:54.355 "dhgroup": "null" 00:20:54.355 } 00:20:54.355 } 00:20:54.355 ]' 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.355 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.615 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:54.616 19:15:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 
8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:20:55.185 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.445 19:15:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.706 00:20:55.706 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.706 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.706 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.966 { 00:20:55.966 "cntlid": 53, 00:20:55.966 "qid": 0, 00:20:55.966 "state": "enabled", 00:20:55.966 "thread": "nvmf_tgt_poll_group_000", 00:20:55.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:55.966 "listen_address": { 00:20:55.966 "trtype": "RDMA", 00:20:55.966 "adrfam": "IPv4", 00:20:55.966 "traddr": "192.168.100.8", 00:20:55.966 "trsvcid": "4420" 00:20:55.966 }, 00:20:55.966 "peer_address": { 00:20:55.966 "trtype": "RDMA", 00:20:55.966 "adrfam": "IPv4", 00:20:55.966 "traddr": "192.168.100.8", 00:20:55.966 "trsvcid": "42810" 00:20:55.966 }, 00:20:55.966 "auth": { 00:20:55.966 "state": "completed", 00:20:55.966 "digest": "sha384", 00:20:55.966 "dhgroup": "null" 00:20:55.966 } 00:20:55.966 } 00:20:55.966 ]' 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.966 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.226 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.226 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.226 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.226 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.226 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.486 19:15:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:56.486 19:15:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:20:57.056 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.056 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:57.056 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.056 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.056 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.056 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.056 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.056 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.316 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.316 19:15:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.576 00:20:57.576 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.576 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.576 19:15:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.836 { 00:20:57.836 "cntlid": 55, 00:20:57.836 "qid": 0, 00:20:57.836 "state": "enabled", 00:20:57.836 "thread": "nvmf_tgt_poll_group_000", 00:20:57.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:57.836 "listen_address": { 00:20:57.836 "trtype": "RDMA", 00:20:57.836 "adrfam": "IPv4", 00:20:57.836 "traddr": "192.168.100.8", 00:20:57.836 "trsvcid": "4420" 00:20:57.836 }, 00:20:57.836 "peer_address": { 00:20:57.836 "trtype": "RDMA", 00:20:57.836 "adrfam": "IPv4", 00:20:57.836 "traddr": "192.168.100.8", 00:20:57.836 "trsvcid": "34091" 00:20:57.836 }, 00:20:57.836 "auth": { 00:20:57.836 "state": "completed", 00:20:57.836 "digest": "sha384", 00:20:57.836 "dhgroup": "null" 00:20:57.836 } 00:20:57.836 } 00:20:57.836 ]' 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.836 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.096 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:58.096 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:20:58.665 19:15:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.925 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.926 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.185 00:20:59.185 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.185 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.185 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.446 { 00:20:59.446 "cntlid": 57, 00:20:59.446 "qid": 0, 00:20:59.446 "state": "enabled", 00:20:59.446 "thread": "nvmf_tgt_poll_group_000", 00:20:59.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:59.446 "listen_address": { 00:20:59.446 "trtype": "RDMA", 00:20:59.446 "adrfam": "IPv4", 00:20:59.446 "traddr": "192.168.100.8", 00:20:59.446 "trsvcid": "4420" 00:20:59.446 }, 00:20:59.446 "peer_address": { 00:20:59.446 "trtype": "RDMA", 00:20:59.446 "adrfam": "IPv4", 00:20:59.446 "traddr": "192.168.100.8", 00:20:59.446 "trsvcid": "43262" 00:20:59.446 }, 00:20:59.446 "auth": { 00:20:59.446 "state": "completed", 00:20:59.446 "digest": "sha384", 00:20:59.446 "dhgroup": "ffdhe2048" 00:20:59.446 } 00:20:59.446 } 00:20:59.446 ]' 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.446 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.705 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.705 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.705 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.705 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.705 19:15:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.705 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:20:59.705 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.645 19:15:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.645 
19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.645 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.645 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.645 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.910 00:21:00.910 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.910 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.910 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.169 { 00:21:01.169 "cntlid": 59, 00:21:01.169 "qid": 0, 00:21:01.169 "state": "enabled", 00:21:01.169 "thread": "nvmf_tgt_poll_group_000", 00:21:01.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:01.169 "listen_address": { 00:21:01.169 "trtype": "RDMA", 00:21:01.169 "adrfam": "IPv4", 00:21:01.169 "traddr": "192.168.100.8", 00:21:01.169 "trsvcid": "4420" 00:21:01.169 }, 00:21:01.169 "peer_address": { 00:21:01.169 "trtype": "RDMA", 00:21:01.169 "adrfam": "IPv4", 00:21:01.169 "traddr": "192.168.100.8", 00:21:01.169 "trsvcid": "45135" 00:21:01.169 }, 00:21:01.169 "auth": { 00:21:01.169 "state": "completed", 00:21:01.169 "digest": "sha384", 00:21:01.169 "dhgroup": "ffdhe2048" 00:21:01.169 } 00:21:01.169 } 00:21:01.169 ]' 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.169 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.429 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.429 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
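Each sweep iteration follows the same host-side sequence seen in the trace: configure the host's allowed DHCHAP digests and DH groups, register the host NQN on the subsystem with its key pair, attach a bdev controller with the same keys, then detach after the checks. A condensed sketch of one iteration (sha384/ffdhe2048, key1), assuming the key1/ckey1 keyring entries were created earlier in the run; the rpc variable is shorthand for the rpc.py path used throughout:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # host side: restrict the initiator to this digest/dhgroup combination
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # target side: allow the host NQN with the matching key pair
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller over RDMA, authenticating with the keys
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # tear down before the next digest/dhgroup/key combination
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0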
00:21:01.429 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.429 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.429 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.688 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:01.688 19:15:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:02.258 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.258 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:02.258 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.258 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.258 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.258 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.258 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.258 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.518 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.778 00:21:02.778 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.778 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.778 19:15:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.038 { 00:21:03.038 "cntlid": 61, 00:21:03.038 "qid": 0, 00:21:03.038 "state": "enabled", 00:21:03.038 "thread": "nvmf_tgt_poll_group_000", 00:21:03.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:03.038 "listen_address": { 00:21:03.038 "trtype": "RDMA", 00:21:03.038 "adrfam": "IPv4", 00:21:03.038 "traddr": "192.168.100.8", 00:21:03.038 "trsvcid": "4420" 00:21:03.038 }, 00:21:03.038 "peer_address": { 00:21:03.038 "trtype": "RDMA", 00:21:03.038 "adrfam": "IPv4", 00:21:03.038 "traddr": "192.168.100.8", 00:21:03.038 "trsvcid": "45110" 00:21:03.038 }, 00:21:03.038 "auth": { 00:21:03.038 "state": "completed", 00:21:03.038 "digest": "sha384", 00:21:03.038 "dhgroup": "ffdhe2048" 00:21:03.038 } 00:21:03.038 } 00:21:03.038 ]' 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
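After the bdev-level check, each iteration also exercises the in-kernel initiator: the host connects with nvme-cli, passing the same DHCHAP secrets in DHHC-1 form, then disconnects and is removed from the subsystem. A condensed sketch of that leg, with $DHCHAP_KEY and $DHCHAP_CTRL_KEY as hypothetical stand-ins for the DHHC-1:xx: secrets printed in the trace:

  # kernel-initiator leg; flags match the nvme_connect helper traced above
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
      --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0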
00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.038 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.297 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:03.297 19:15:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:03.867 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.126 
19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.126 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.127 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.386 00:21:04.386 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.386 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.386 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.646 { 00:21:04.646 "cntlid": 63, 00:21:04.646 "qid": 0, 00:21:04.646 "state": "enabled", 00:21:04.646 "thread": "nvmf_tgt_poll_group_000", 00:21:04.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:04.646 "listen_address": { 00:21:04.646 "trtype": "RDMA", 00:21:04.646 "adrfam": "IPv4", 00:21:04.646 "traddr": "192.168.100.8", 00:21:04.646 "trsvcid": "4420" 00:21:04.646 }, 00:21:04.646 "peer_address": { 00:21:04.646 "trtype": "RDMA", 00:21:04.646 "adrfam": "IPv4", 00:21:04.646 "traddr": "192.168.100.8", 00:21:04.646 "trsvcid": "56813" 00:21:04.646 }, 00:21:04.646 "auth": { 00:21:04.646 "state": "completed", 00:21:04.646 "digest": "sha384", 00:21:04.646 "dhgroup": "ffdhe2048" 00:21:04.646 } 00:21:04.646 } 00:21:04.646 ]' 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.646 19:15:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.906 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.906 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.906 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.906 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:04.906 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.847 19:15:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 
00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.847 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.107 00:21:06.107 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.107 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.107 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.367 { 00:21:06.367 "cntlid": 65, 00:21:06.367 "qid": 0, 00:21:06.367 "state": "enabled", 00:21:06.367 "thread": "nvmf_tgt_poll_group_000", 00:21:06.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:06.367 "listen_address": { 00:21:06.367 "trtype": "RDMA", 00:21:06.367 "adrfam": "IPv4", 00:21:06.367 "traddr": "192.168.100.8", 00:21:06.367 "trsvcid": "4420" 00:21:06.367 }, 00:21:06.367 "peer_address": { 00:21:06.367 "trtype": "RDMA", 00:21:06.367 "adrfam": "IPv4", 00:21:06.367 "traddr": "192.168.100.8", 00:21:06.367 "trsvcid": "52432" 00:21:06.367 }, 00:21:06.367 "auth": { 00:21:06.367 "state": "completed", 00:21:06.367 "digest": "sha384", 00:21:06.367 "dhgroup": "ffdhe3072" 
00:21:06.367 } 00:21:06.367 } 00:21:06.367 ]' 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.367 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.627 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.627 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.627 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.627 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.627 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.627 19:15:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:06.627 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.827 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.827 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.827 19:15:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.827 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.087 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.087 { 00:21:08.087 "cntlid": 67, 00:21:08.087 "qid": 0, 00:21:08.087 "state": "enabled", 00:21:08.087 "thread": "nvmf_tgt_poll_group_000", 00:21:08.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:08.087 "listen_address": { 00:21:08.087 "trtype": "RDMA", 00:21:08.087 "adrfam": "IPv4", 00:21:08.087 "traddr": "192.168.100.8", 00:21:08.087 "trsvcid": 
"4420" 00:21:08.087 }, 00:21:08.087 "peer_address": { 00:21:08.087 "trtype": "RDMA", 00:21:08.087 "adrfam": "IPv4", 00:21:08.087 "traddr": "192.168.100.8", 00:21:08.087 "trsvcid": "43284" 00:21:08.087 }, 00:21:08.088 "auth": { 00:21:08.088 "state": "completed", 00:21:08.088 "digest": "sha384", 00:21:08.088 "dhgroup": "ffdhe3072" 00:21:08.088 } 00:21:08.088 } 00:21:08.088 ]' 00:21:08.088 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.088 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.088 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.348 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.348 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.348 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.348 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.348 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.348 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:08.348 19:15:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 
00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.288 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.289 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.289 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.289 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.289 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.289 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.289 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.289 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.548 00:21:09.808 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.809 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.809 19:15:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.809 { 00:21:09.809 "cntlid": 69, 00:21:09.809 "qid": 0, 00:21:09.809 "state": "enabled", 00:21:09.809 "thread": "nvmf_tgt_poll_group_000", 
00:21:09.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:09.809 "listen_address": { 00:21:09.809 "trtype": "RDMA", 00:21:09.809 "adrfam": "IPv4", 00:21:09.809 "traddr": "192.168.100.8", 00:21:09.809 "trsvcid": "4420" 00:21:09.809 }, 00:21:09.809 "peer_address": { 00:21:09.809 "trtype": "RDMA", 00:21:09.809 "adrfam": "IPv4", 00:21:09.809 "traddr": "192.168.100.8", 00:21:09.809 "trsvcid": "48411" 00:21:09.809 }, 00:21:09.809 "auth": { 00:21:09.809 "state": "completed", 00:21:09.809 "digest": "sha384", 00:21:09.809 "dhgroup": "ffdhe3072" 00:21:09.809 } 00:21:09.809 } 00:21:09.809 ]' 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.809 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.068 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.068 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.068 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.068 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.068 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.328 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:10.328 19:15:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:10.903 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.903 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:10.903 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.903 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.903 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.903 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.903 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 
00:21:10.903 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.162 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.421 00:21:11.421 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.421 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.421 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:21:11.681 { 00:21:11.681 "cntlid": 71, 00:21:11.681 "qid": 0, 00:21:11.681 "state": "enabled", 00:21:11.681 "thread": "nvmf_tgt_poll_group_000", 00:21:11.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:11.681 "listen_address": { 00:21:11.681 "trtype": "RDMA", 00:21:11.681 "adrfam": "IPv4", 00:21:11.681 "traddr": "192.168.100.8", 00:21:11.681 "trsvcid": "4420" 00:21:11.681 }, 00:21:11.681 "peer_address": { 00:21:11.681 "trtype": "RDMA", 00:21:11.681 "adrfam": "IPv4", 00:21:11.681 "traddr": "192.168.100.8", 00:21:11.681 "trsvcid": "37202" 00:21:11.681 }, 00:21:11.681 "auth": { 00:21:11.681 "state": "completed", 00:21:11.681 "digest": "sha384", 00:21:11.681 "dhgroup": "ffdhe3072" 00:21:11.681 } 00:21:11.681 } 00:21:11.681 ]' 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.681 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.682 19:15:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.941 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:11.941 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:12.511 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.772 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:12.772 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.772 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.772 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.772 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.772 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.772 19:15:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.772 19:15:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.772 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.032 00:21:13.032 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.032 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.032 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.296 { 00:21:13.296 "cntlid": 73, 00:21:13.296 "qid": 0, 00:21:13.296 "state": "enabled", 00:21:13.296 "thread": "nvmf_tgt_poll_group_000", 00:21:13.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:13.296 "listen_address": { 00:21:13.296 "trtype": "RDMA", 00:21:13.296 "adrfam": "IPv4", 00:21:13.296 "traddr": "192.168.100.8", 00:21:13.296 "trsvcid": "4420" 00:21:13.296 }, 00:21:13.296 "peer_address": { 00:21:13.296 "trtype": "RDMA", 00:21:13.296 "adrfam": "IPv4", 00:21:13.296 "traddr": "192.168.100.8", 00:21:13.296 "trsvcid": "37007" 00:21:13.296 }, 00:21:13.296 "auth": { 00:21:13.296 "state": "completed", 00:21:13.296 "digest": "sha384", 00:21:13.296 "dhgroup": "ffdhe4096" 00:21:13.296 } 00:21:13.296 } 00:21:13.296 ]' 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.296 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.556 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.556 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.556 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.556 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.556 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.816 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:13.816 19:15:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:14.386 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.386 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:14.386 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.386 19:15:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.386 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.386 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.386 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.386 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.646 19:15:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.906 00:21:14.906 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.906 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.906 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
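Each round also has a kernel-initiator leg, visible as the auth.sh@80/@36/@82 records throughout the trace: the same DHHC-1 secrets are handed to nvme-cli, which must complete the identical handshake and then disconnect cleanly. The shape of that step is sketched here, with $key and $ckey standing in for the DHHC-1:xx:...: strings printed in the trace (all other flags copied from it):

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    # Expected output: NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0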
00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.166 { 00:21:15.166 "cntlid": 75, 00:21:15.166 "qid": 0, 00:21:15.166 "state": "enabled", 00:21:15.166 "thread": "nvmf_tgt_poll_group_000", 00:21:15.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:15.166 "listen_address": { 00:21:15.166 "trtype": "RDMA", 00:21:15.166 "adrfam": "IPv4", 00:21:15.166 "traddr": "192.168.100.8", 00:21:15.166 "trsvcid": "4420" 00:21:15.166 }, 00:21:15.166 "peer_address": { 00:21:15.166 "trtype": "RDMA", 00:21:15.166 "adrfam": "IPv4", 00:21:15.166 "traddr": "192.168.100.8", 00:21:15.166 "trsvcid": "54815" 00:21:15.166 }, 00:21:15.166 "auth": { 00:21:15.166 "state": "completed", 00:21:15.166 "digest": "sha384", 00:21:15.166 "dhgroup": "ffdhe4096" 00:21:15.166 } 00:21:15.166 } 00:21:15.166 ]' 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.166 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.426 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:15.426 19:15:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:15.996 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.256 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:16.256 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.256 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.256 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.256 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.256 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.256 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.515 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.516 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.776 00:21:16.776 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.776 19:15:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.776 19:15:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.035 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.035 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.035 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.035 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.035 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.035 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.035 { 00:21:17.035 "cntlid": 77, 00:21:17.035 "qid": 0, 00:21:17.035 "state": "enabled", 00:21:17.035 "thread": "nvmf_tgt_poll_group_000", 00:21:17.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:17.035 "listen_address": { 00:21:17.035 "trtype": "RDMA", 00:21:17.035 "adrfam": "IPv4", 00:21:17.036 "traddr": "192.168.100.8", 00:21:17.036 "trsvcid": "4420" 00:21:17.036 }, 00:21:17.036 "peer_address": { 00:21:17.036 "trtype": "RDMA", 00:21:17.036 "adrfam": "IPv4", 00:21:17.036 "traddr": "192.168.100.8", 00:21:17.036 "trsvcid": "56002" 00:21:17.036 }, 00:21:17.036 "auth": { 00:21:17.036 "state": "completed", 00:21:17.036 "digest": "sha384", 00:21:17.036 "dhgroup": "ffdhe4096" 00:21:17.036 } 00:21:17.036 } 00:21:17.036 ]' 00:21:17.036 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.036 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.036 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.036 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.036 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.036 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.036 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.036 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.295 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:17.295 19:15:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:17.865 19:15:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.865 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:17.865 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.865 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.125 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.384 00:21:18.384 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:18.384 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.384 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.644 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.644 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.644 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.644 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.644 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.644 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.644 { 00:21:18.644 "cntlid": 79, 00:21:18.644 "qid": 0, 00:21:18.644 "state": "enabled", 00:21:18.644 "thread": "nvmf_tgt_poll_group_000", 00:21:18.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:18.644 "listen_address": { 00:21:18.644 "trtype": "RDMA", 00:21:18.644 "adrfam": "IPv4", 00:21:18.644 "traddr": "192.168.100.8", 00:21:18.644 "trsvcid": "4420" 00:21:18.644 }, 00:21:18.644 "peer_address": { 00:21:18.644 "trtype": "RDMA", 00:21:18.644 "adrfam": "IPv4", 00:21:18.644 "traddr": "192.168.100.8", 00:21:18.644 "trsvcid": "46075" 00:21:18.644 }, 00:21:18.644 "auth": { 00:21:18.644 "state": "completed", 00:21:18.644 "digest": "sha384", 00:21:18.644 "dhgroup": "ffdhe4096" 00:21:18.644 } 00:21:18.644 } 00:21:18.644 ]' 00:21:18.644 19:15:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.644 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.644 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.903 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.903 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.903 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.903 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.904 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.163 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:19.163 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:19.733 19:15:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.733 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:19.733 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.733 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.733 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.733 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.733 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.733 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.733 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.993 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.253 00:21:20.253 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.253 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.253 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.513 { 00:21:20.513 "cntlid": 81, 00:21:20.513 "qid": 0, 00:21:20.513 "state": "enabled", 00:21:20.513 "thread": "nvmf_tgt_poll_group_000", 00:21:20.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:20.513 "listen_address": { 00:21:20.513 "trtype": "RDMA", 00:21:20.513 "adrfam": "IPv4", 00:21:20.513 "traddr": "192.168.100.8", 00:21:20.513 "trsvcid": "4420" 00:21:20.513 }, 00:21:20.513 "peer_address": { 00:21:20.513 "trtype": "RDMA", 00:21:20.513 "adrfam": "IPv4", 00:21:20.513 "traddr": "192.168.100.8", 00:21:20.513 "trsvcid": "45306" 00:21:20.513 }, 00:21:20.513 "auth": { 00:21:20.513 "state": "completed", 00:21:20.513 "digest": "sha384", 00:21:20.513 "dhgroup": "ffdhe6144" 00:21:20.513 } 00:21:20.513 } 00:21:20.513 ]' 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:20.513 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.778 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.779 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.779 19:15:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.779 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret 
DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:20.779 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:21.348 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.608 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:21.608 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.608 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.608 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.608 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.608 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.608 19:15:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.868 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:21.868 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.869 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.129 00:21:22.129 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.129 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.129 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.389 { 00:21:22.389 "cntlid": 83, 00:21:22.389 "qid": 0, 00:21:22.389 "state": "enabled", 00:21:22.389 "thread": "nvmf_tgt_poll_group_000", 00:21:22.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:22.389 "listen_address": { 00:21:22.389 "trtype": "RDMA", 00:21:22.389 "adrfam": "IPv4", 00:21:22.389 "traddr": "192.168.100.8", 00:21:22.389 "trsvcid": "4420" 00:21:22.389 }, 00:21:22.389 "peer_address": { 00:21:22.389 "trtype": "RDMA", 00:21:22.389 "adrfam": "IPv4", 00:21:22.389 "traddr": "192.168.100.8", 00:21:22.389 "trsvcid": "34016" 00:21:22.389 }, 00:21:22.389 "auth": { 00:21:22.389 "state": "completed", 00:21:22.389 "digest": "sha384", 00:21:22.389 "dhgroup": "ffdhe6144" 00:21:22.389 } 00:21:22.389 } 00:21:22.389 ]' 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.389 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.649 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:22.649 19:15:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:23.218 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.478 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:23.478 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.478 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.479 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.479 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.479 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.479 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
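The records above complete the target-side registration step of one sha384/ffdhe6144 pass (key2). Condensed into plain commands, a full pass of this inner loop looks roughly like the sketch below; paths, NQNs, addresses, and flags are copied from the log, while the key names key2/ckey2 are assumed to have been registered with the host earlier in target/auth.sh, outside this excerpt.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side (-s /var/tmp/host.sock): pin the initiator to the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side (default RPC socket): allow the host NQN with a key/controller-key pair.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller over RDMA, authenticating with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller came up and the qpair negotiated what was requested.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.digest'         # expect sha384
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.dhgroup'        # expect ffdhe6144
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'          # expect completed

# Tear down before the next key/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Note that --dhchap-ctrlr-key is what makes the authentication bidirectional; the key3 passes in this log omit it on nvmf_subsystem_add_host (the ${ckeys[$3]:+...} expansion is empty), so those passes exercise one-way authentication of the host only.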
00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.739 19:15:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.999 00:21:23.999 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.999 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.999 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.259 { 00:21:24.259 "cntlid": 85, 00:21:24.259 "qid": 0, 00:21:24.259 "state": "enabled", 00:21:24.259 "thread": "nvmf_tgt_poll_group_000", 00:21:24.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:24.259 "listen_address": { 00:21:24.259 "trtype": "RDMA", 00:21:24.259 "adrfam": "IPv4", 00:21:24.259 "traddr": "192.168.100.8", 00:21:24.259 "trsvcid": "4420" 00:21:24.259 }, 00:21:24.259 "peer_address": { 00:21:24.259 "trtype": "RDMA", 00:21:24.259 "adrfam": "IPv4", 00:21:24.259 "traddr": "192.168.100.8", 00:21:24.259 "trsvcid": "34860" 00:21:24.259 }, 00:21:24.259 "auth": { 00:21:24.259 "state": "completed", 00:21:24.259 "digest": "sha384", 00:21:24.259 "dhgroup": "ffdhe6144" 00:21:24.259 } 00:21:24.259 } 00:21:24.259 ]' 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.259 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.519 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:24.519 19:15:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:25.088 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.348 
19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.348 19:15:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.919 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.919 { 00:21:25.919 "cntlid": 87, 00:21:25.919 "qid": 0, 00:21:25.919 "state": "enabled", 00:21:25.919 "thread": "nvmf_tgt_poll_group_000", 00:21:25.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:25.919 "listen_address": { 00:21:25.919 "trtype": "RDMA", 00:21:25.919 "adrfam": "IPv4", 00:21:25.919 "traddr": "192.168.100.8", 00:21:25.919 "trsvcid": "4420" 00:21:25.919 }, 00:21:25.919 "peer_address": { 00:21:25.919 "trtype": "RDMA", 00:21:25.919 "adrfam": "IPv4", 00:21:25.919 "traddr": "192.168.100.8", 00:21:25.919 "trsvcid": "54799" 00:21:25.919 }, 00:21:25.919 "auth": { 00:21:25.919 "state": "completed", 00:21:25.919 "digest": "sha384", 00:21:25.919 "dhgroup": "ffdhe6144" 00:21:25.919 } 00:21:25.919 } 00:21:25.919 ]' 00:21:25.919 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.179 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.179 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.179 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.179 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.179 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.179 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.179 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.439 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:26.439 19:16:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:27.008 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.008 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:27.008 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.008 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.008 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.008 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.008 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.009 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.009 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.268 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:27.268 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.268 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.269 19:16:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.839 00:21:27.839 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.839 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.839 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.839 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.839 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.839 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.839 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.098 { 00:21:28.098 "cntlid": 89, 00:21:28.098 "qid": 0, 00:21:28.098 "state": "enabled", 00:21:28.098 "thread": "nvmf_tgt_poll_group_000", 00:21:28.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:28.098 "listen_address": { 00:21:28.098 "trtype": "RDMA", 00:21:28.098 "adrfam": "IPv4", 00:21:28.098 "traddr": "192.168.100.8", 00:21:28.098 "trsvcid": "4420" 00:21:28.098 }, 00:21:28.098 "peer_address": { 00:21:28.098 "trtype": "RDMA", 00:21:28.098 "adrfam": "IPv4", 00:21:28.098 "traddr": "192.168.100.8", 00:21:28.098 "trsvcid": "52961" 00:21:28.098 }, 00:21:28.098 "auth": { 00:21:28.098 "state": "completed", 00:21:28.098 "digest": "sha384", 00:21:28.098 "dhgroup": "ffdhe8192" 00:21:28.098 } 00:21:28.098 } 00:21:28.098 ]' 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.098 
19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.098 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.358 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:28.358 19:16:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:28.929 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.929 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:28.929 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.929 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.929 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.929 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.929 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.929 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.189 19:16:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.189 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.759 00:21:29.759 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.759 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.759 19:16:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.020 { 00:21:30.020 "cntlid": 91, 00:21:30.020 "qid": 0, 00:21:30.020 "state": "enabled", 00:21:30.020 "thread": "nvmf_tgt_poll_group_000", 00:21:30.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:30.020 "listen_address": { 00:21:30.020 "trtype": "RDMA", 00:21:30.020 "adrfam": "IPv4", 00:21:30.020 "traddr": "192.168.100.8", 00:21:30.020 "trsvcid": "4420" 00:21:30.020 }, 00:21:30.020 "peer_address": { 00:21:30.020 "trtype": "RDMA", 00:21:30.020 "adrfam": "IPv4", 00:21:30.020 "traddr": "192.168.100.8", 00:21:30.020 "trsvcid": "54422" 00:21:30.020 }, 00:21:30.020 "auth": { 00:21:30.020 "state": 
"completed", 00:21:30.020 "digest": "sha384", 00:21:30.020 "dhgroup": "ffdhe8192" 00:21:30.020 } 00:21:30.020 } 00:21:30.020 ]' 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.020 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.281 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:30.281 19:16:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:30.851 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.110 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.680 00:21:31.680 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.680 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.680 19:16:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.940 { 00:21:31.940 "cntlid": 93, 00:21:31.940 "qid": 0, 00:21:31.940 "state": "enabled", 00:21:31.940 "thread": "nvmf_tgt_poll_group_000", 00:21:31.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:31.940 "listen_address": { 00:21:31.940 "trtype": "RDMA", 00:21:31.940 "adrfam": "IPv4", 00:21:31.940 "traddr": "192.168.100.8", 00:21:31.940 "trsvcid": "4420" 
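The qpair record being printed here is what the subsequent jq assertions inspect. For each of these passes the script also round-trips the same key material through the kernel initiator with nvme-cli, passing the DH-HMAC-CHAP secrets inline. A minimal sketch of that leg, with the secrets and flags copied verbatim from the log (these are throwaway test keys; the DHHC-1:<nn>: prefix is the NVMe secret representation, whose middle field records the hash transformation applied to the key):

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# Connect through the kernel host stack, authenticating with inline secrets.
nvme connect -t rdma -a 192.168.100.8 -n $subnqn -i 1 -q $hostnqn \
  --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
  --dhchap-secret 'DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==:' \
  --dhchap-ctrl-secret 'DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg:'

# Drop the connection and deregister the host before the next combination.
nvme disconnect -n $subnqn        # expect: disconnected 1 controller(s)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
  nvmf_subsystem_remove_host $subnqn $hostnqn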
00:21:31.940 }, 00:21:31.940 "peer_address": { 00:21:31.940 "trtype": "RDMA", 00:21:31.940 "adrfam": "IPv4", 00:21:31.940 "traddr": "192.168.100.8", 00:21:31.940 "trsvcid": "45056" 00:21:31.940 }, 00:21:31.940 "auth": { 00:21:31.940 "state": "completed", 00:21:31.940 "digest": "sha384", 00:21:31.940 "dhgroup": "ffdhe8192" 00:21:31.940 } 00:21:31.940 } 00:21:31.940 ]' 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.940 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.200 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:32.200 19:16:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:32.770 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.029 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:33.029 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.029 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.029 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.029 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.029 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.029 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.289 
19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.289 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.549 00:21:33.549 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.549 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.549 19:16:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.809 { 00:21:33.809 "cntlid": 95, 00:21:33.809 "qid": 0, 00:21:33.809 "state": "enabled", 00:21:33.809 "thread": "nvmf_tgt_poll_group_000", 00:21:33.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:33.809 
"listen_address": { 00:21:33.809 "trtype": "RDMA", 00:21:33.809 "adrfam": "IPv4", 00:21:33.809 "traddr": "192.168.100.8", 00:21:33.809 "trsvcid": "4420" 00:21:33.809 }, 00:21:33.809 "peer_address": { 00:21:33.809 "trtype": "RDMA", 00:21:33.809 "adrfam": "IPv4", 00:21:33.809 "traddr": "192.168.100.8", 00:21:33.809 "trsvcid": "34368" 00:21:33.809 }, 00:21:33.809 "auth": { 00:21:33.809 "state": "completed", 00:21:33.809 "digest": "sha384", 00:21:33.809 "dhgroup": "ffdhe8192" 00:21:33.809 } 00:21:33.809 } 00:21:33.809 ]' 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.809 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.069 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.069 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.069 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.069 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.069 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.329 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:34.329 19:16:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:21:34.899 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.159 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.419 00:21:35.419 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.419 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.419 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.679 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.679 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.679 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.679 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.679 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:35.679 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.679 { 00:21:35.679 "cntlid": 97, 00:21:35.679 "qid": 0, 00:21:35.680 "state": "enabled", 00:21:35.680 "thread": "nvmf_tgt_poll_group_000", 00:21:35.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:35.680 "listen_address": { 00:21:35.680 "trtype": "RDMA", 00:21:35.680 "adrfam": "IPv4", 00:21:35.680 "traddr": "192.168.100.8", 00:21:35.680 "trsvcid": "4420" 00:21:35.680 }, 00:21:35.680 "peer_address": { 00:21:35.680 "trtype": "RDMA", 00:21:35.680 "adrfam": "IPv4", 00:21:35.680 "traddr": "192.168.100.8", 00:21:35.680 "trsvcid": "34927" 00:21:35.680 }, 00:21:35.680 "auth": { 00:21:35.680 "state": "completed", 00:21:35.680 "digest": "sha512", 00:21:35.680 "dhgroup": "null" 00:21:35.680 } 00:21:35.680 } 00:21:35.680 ]' 00:21:35.680 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.680 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.680 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.680 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:35.680 19:16:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.680 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.680 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.680 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.939 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:35.939 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:36.509 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.769 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:36.769 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.769 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.769 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.769 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.769 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.769 19:16:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.029 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.289 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.289 19:16:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.289 { 00:21:37.289 "cntlid": 99, 00:21:37.289 "qid": 0, 00:21:37.289 "state": "enabled", 00:21:37.289 "thread": "nvmf_tgt_poll_group_000", 00:21:37.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:37.289 "listen_address": { 00:21:37.289 "trtype": "RDMA", 00:21:37.289 "adrfam": "IPv4", 00:21:37.289 "traddr": "192.168.100.8", 00:21:37.289 "trsvcid": "4420" 00:21:37.289 }, 00:21:37.289 "peer_address": { 00:21:37.289 "trtype": "RDMA", 00:21:37.289 "adrfam": "IPv4", 00:21:37.289 "traddr": "192.168.100.8", 00:21:37.289 "trsvcid": "58894" 00:21:37.289 }, 00:21:37.289 "auth": { 00:21:37.289 "state": "completed", 00:21:37.289 "digest": "sha512", 00:21:37.289 "dhgroup": "null" 00:21:37.289 } 00:21:37.289 } 00:21:37.289 ]' 00:21:37.289 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.548 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.548 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.548 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:37.548 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.548 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.548 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.548 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.808 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:37.808 19:16:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:38.377 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.377 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:38.377 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.377 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.377 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.377 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.377 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.377 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.637 19:16:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.897 00:21:38.897 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.897 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.897 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.157 { 00:21:39.157 "cntlid": 101, 00:21:39.157 "qid": 0, 00:21:39.157 "state": "enabled", 00:21:39.157 "thread": "nvmf_tgt_poll_group_000", 00:21:39.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:39.157 "listen_address": { 00:21:39.157 "trtype": "RDMA", 00:21:39.157 "adrfam": "IPv4", 00:21:39.157 "traddr": "192.168.100.8", 00:21:39.157 "trsvcid": "4420" 00:21:39.157 }, 00:21:39.157 "peer_address": { 00:21:39.157 "trtype": "RDMA", 00:21:39.157 "adrfam": "IPv4", 00:21:39.157 "traddr": "192.168.100.8", 00:21:39.157 "trsvcid": "60045" 00:21:39.157 }, 00:21:39.157 "auth": { 00:21:39.157 "state": "completed", 00:21:39.157 "digest": "sha512", 00:21:39.157 "dhgroup": "null" 00:21:39.157 } 00:21:39.157 } 00:21:39.157 ]' 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.157 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.416 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:39.416 19:16:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:39.984 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.243 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:40.243 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.243 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.243 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.243 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.243 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.243 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.503 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.762 00:21:40.762 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.762 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.762 19:16:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.762 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.762 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.762 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.762 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.020 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.020 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.020 { 00:21:41.021 "cntlid": 103, 00:21:41.021 "qid": 0, 00:21:41.021 "state": "enabled", 00:21:41.021 "thread": "nvmf_tgt_poll_group_000", 00:21:41.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:41.021 "listen_address": { 00:21:41.021 "trtype": "RDMA", 00:21:41.021 "adrfam": "IPv4", 00:21:41.021 "traddr": "192.168.100.8", 00:21:41.021 "trsvcid": "4420" 00:21:41.021 }, 00:21:41.021 "peer_address": { 00:21:41.021 "trtype": "RDMA", 00:21:41.021 "adrfam": "IPv4", 00:21:41.021 "traddr": "192.168.100.8", 00:21:41.021 "trsvcid": "37959" 00:21:41.021 }, 00:21:41.021 "auth": { 00:21:41.021 "state": "completed", 00:21:41.021 "digest": "sha512", 00:21:41.021 "dhgroup": "null" 00:21:41.021 } 00:21:41.021 } 00:21:41.021 ]' 00:21:41.021 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.021 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.021 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.021 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:41.021 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.021 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.021 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.021 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.280 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:41.280 19:16:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.850 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.110 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.370 00:21:42.370 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.370 19:16:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.370 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.630 { 00:21:42.630 "cntlid": 105, 00:21:42.630 "qid": 0, 00:21:42.630 "state": "enabled", 00:21:42.630 "thread": "nvmf_tgt_poll_group_000", 00:21:42.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:42.630 "listen_address": { 00:21:42.630 "trtype": "RDMA", 00:21:42.630 "adrfam": "IPv4", 00:21:42.630 "traddr": "192.168.100.8", 00:21:42.630 "trsvcid": "4420" 00:21:42.630 }, 00:21:42.630 "peer_address": { 00:21:42.630 "trtype": "RDMA", 00:21:42.630 "adrfam": "IPv4", 00:21:42.630 "traddr": "192.168.100.8", 00:21:42.630 "trsvcid": "32814" 00:21:42.630 }, 00:21:42.630 "auth": { 00:21:42.630 "state": "completed", 00:21:42.630 "digest": "sha512", 00:21:42.630 "dhgroup": "ffdhe2048" 00:21:42.630 } 00:21:42.630 } 00:21:42.630 ]' 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.630 19:16:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.630 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.630 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.890 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.891 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:42.891 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:43.460 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.721 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:43.721 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.721 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.721 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.721 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.721 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.721 19:16:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.980 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:43.980 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.980 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.980 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.980 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.980 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.981 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.981 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.981 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.981 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.981 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.981 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.981 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.241 00:21:44.241 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.241 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.241 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.501 { 00:21:44.501 "cntlid": 107, 00:21:44.501 "qid": 0, 00:21:44.501 "state": "enabled", 00:21:44.501 "thread": "nvmf_tgt_poll_group_000", 00:21:44.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:44.501 "listen_address": { 00:21:44.501 "trtype": "RDMA", 00:21:44.501 "adrfam": "IPv4", 00:21:44.501 "traddr": "192.168.100.8", 00:21:44.501 "trsvcid": "4420" 00:21:44.501 }, 00:21:44.501 "peer_address": { 00:21:44.501 "trtype": "RDMA", 00:21:44.501 "adrfam": "IPv4", 00:21:44.501 "traddr": "192.168.100.8", 00:21:44.501 "trsvcid": "40798" 00:21:44.501 }, 00:21:44.501 "auth": { 00:21:44.501 "state": "completed", 00:21:44.501 "digest": "sha512", 00:21:44.501 "dhgroup": "ffdhe2048" 00:21:44.501 } 00:21:44.501 } 00:21:44.501 ]' 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.501 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.761 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 
00:21:44.761 19:16:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:45.331 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.591 19:16:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.851 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.112 { 00:21:46.112 "cntlid": 109, 00:21:46.112 "qid": 0, 00:21:46.112 "state": "enabled", 00:21:46.112 "thread": "nvmf_tgt_poll_group_000", 00:21:46.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:46.112 "listen_address": { 00:21:46.112 "trtype": "RDMA", 00:21:46.112 "adrfam": "IPv4", 00:21:46.112 "traddr": "192.168.100.8", 00:21:46.112 "trsvcid": "4420" 00:21:46.112 }, 00:21:46.112 "peer_address": { 00:21:46.112 "trtype": "RDMA", 00:21:46.112 "adrfam": "IPv4", 00:21:46.112 "traddr": "192.168.100.8", 00:21:46.112 "trsvcid": "60538" 00:21:46.112 }, 00:21:46.112 "auth": { 00:21:46.112 "state": "completed", 00:21:46.112 "digest": "sha512", 00:21:46.112 "dhgroup": "ffdhe2048" 00:21:46.112 } 00:21:46.112 } 00:21:46.112 ]' 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.112 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.372 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.372 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.372 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.372 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.372 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.632 19:16:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:46.632 19:16:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:47.202 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.202 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:47.202 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.202 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.202 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.202 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.202 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.202 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.462 19:16:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.462 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.722 00:21:47.722 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.722 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.722 19:16:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.981 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.981 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.981 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.981 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.981 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.981 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.981 { 00:21:47.981 "cntlid": 111, 00:21:47.981 "qid": 0, 00:21:47.981 "state": "enabled", 00:21:47.981 "thread": "nvmf_tgt_poll_group_000", 00:21:47.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:47.981 "listen_address": { 00:21:47.981 "trtype": "RDMA", 00:21:47.981 "adrfam": "IPv4", 00:21:47.981 "traddr": "192.168.100.8", 00:21:47.981 "trsvcid": "4420" 00:21:47.981 }, 00:21:47.981 "peer_address": { 00:21:47.981 "trtype": "RDMA", 00:21:47.981 "adrfam": "IPv4", 00:21:47.981 "traddr": "192.168.100.8", 00:21:47.981 "trsvcid": "49040" 00:21:47.981 }, 00:21:47.981 "auth": { 00:21:47.981 "state": "completed", 00:21:47.981 "digest": "sha512", 00:21:47.981 "dhgroup": "ffdhe2048" 00:21:47.981 } 00:21:47.981 } 00:21:47.981 ]' 00:21:47.981 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.981 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.982 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.982 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.982 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.982 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.982 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.982 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.241 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:48.241 19:16:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:48.810 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
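
Note that the key3 passes (the DHHC-1:03: secret above) carry only --dhchap-secret and no --dhchap-ctrl-secret: this suite defines no ckey3, so key3 exercises unidirectional authentication (the host proves itself to the controller but not the reverse), while key0 through key2 run bidirectionally. The script arranges that with the conditional array expansion visible in the trace; the same idiom, with the positional parameter renamed for readability:

    # ckey handling in connect_authenticate ($keyid stands in for "$3")
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # ckeys[3] is empty, so for keyid=3 the expansion yields nothing and the flag is omitted
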
-- # [[ 0 == 0 ]] 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.071 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.331 00:21:49.331 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.331 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.331 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.592 { 00:21:49.592 "cntlid": 113, 00:21:49.592 "qid": 0, 00:21:49.592 "state": "enabled", 00:21:49.592 "thread": "nvmf_tgt_poll_group_000", 00:21:49.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:49.592 "listen_address": { 00:21:49.592 "trtype": "RDMA", 00:21:49.592 "adrfam": "IPv4", 00:21:49.592 "traddr": "192.168.100.8", 00:21:49.592 "trsvcid": "4420" 00:21:49.592 }, 00:21:49.592 "peer_address": { 00:21:49.592 "trtype": "RDMA", 00:21:49.592 "adrfam": "IPv4", 00:21:49.592 "traddr": "192.168.100.8", 00:21:49.592 "trsvcid": "47982" 00:21:49.592 }, 00:21:49.592 "auth": { 00:21:49.592 "state": "completed", 00:21:49.592 "digest": "sha512", 00:21:49.592 "dhgroup": "ffdhe3072" 00:21:49.592 } 00:21:49.592 } 00:21:49.592 ]' 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.592 19:16:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.852 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.852 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.852 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.852 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.852 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.111 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:50.111 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:50.680 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.680 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:50.680 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.680 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.680 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.680 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.680 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.680 19:16:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
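
Each pass through the inner loop follows the same lifecycle: register the host NQN on the subsystem with the key pair under test, attach a controller from the SPDK host stack and check the qpair's auth block, detach, then run the same handshake once more through the Linux kernel initiator via nvme-cli before removing the host again. In outline, with the key1 pass's values and hostrpc/rpc.py as sketched earlier:

    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # ... jq assertions on nvmf_subsystem_get_qpairs, as sketched earlier ...
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"   # kernel pass (identity flags abridged)
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
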
00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.947 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.208 00:21:51.208 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.208 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.208 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.467 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.468 { 00:21:51.468 "cntlid": 115, 00:21:51.468 "qid": 0, 00:21:51.468 "state": "enabled", 00:21:51.468 "thread": "nvmf_tgt_poll_group_000", 00:21:51.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:51.468 "listen_address": { 00:21:51.468 "trtype": "RDMA", 00:21:51.468 "adrfam": "IPv4", 00:21:51.468 "traddr": "192.168.100.8", 00:21:51.468 "trsvcid": "4420" 00:21:51.468 }, 00:21:51.468 "peer_address": { 00:21:51.468 "trtype": "RDMA", 00:21:51.468 "adrfam": "IPv4", 00:21:51.468 "traddr": "192.168.100.8", 00:21:51.468 "trsvcid": "56886" 00:21:51.468 }, 00:21:51.468 "auth": { 00:21:51.468 "state": "completed", 00:21:51.468 "digest": "sha512", 00:21:51.468 "dhgroup": "ffdhe3072" 00:21:51.468 } 00:21:51.468 } 00:21:51.468 ]' 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.468 19:16:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.728 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:51.728 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:52.297 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.557 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:52.557 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.557 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.557 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.557 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.557 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.557 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.818 
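
Two SPDK RPC endpoints are in play throughout this trace: rpc_cmd talks to the nvmf target over the application's default RPC socket (normally /var/tmp/spdk.sock), while the hostrpc wrapper passes -s /var/tmp/host.sock to reach a second SPDK application that plays the NVMe-oF initiator role and owns all the bdev_nvme_* calls. When reading the log, the -s flag is the quickest way to tell the two apart:

    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0    # target app, default socket
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers        # host app, explicit socket
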
19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.818 19:16:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.077 00:21:53.077 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.077 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.077 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.077 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.077 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.077 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.077 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.337 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.337 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.337 { 00:21:53.337 "cntlid": 117, 00:21:53.337 "qid": 0, 00:21:53.337 "state": "enabled", 00:21:53.337 "thread": "nvmf_tgt_poll_group_000", 00:21:53.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:53.337 "listen_address": { 00:21:53.337 "trtype": "RDMA", 00:21:53.337 "adrfam": "IPv4", 00:21:53.337 "traddr": "192.168.100.8", 00:21:53.337 "trsvcid": "4420" 00:21:53.337 }, 00:21:53.337 "peer_address": { 00:21:53.338 "trtype": "RDMA", 00:21:53.338 "adrfam": "IPv4", 00:21:53.338 "traddr": "192.168.100.8", 00:21:53.338 "trsvcid": "41344" 00:21:53.338 }, 00:21:53.338 "auth": { 00:21:53.338 "state": "completed", 00:21:53.338 "digest": "sha512", 00:21:53.338 "dhgroup": "ffdhe3072" 00:21:53.338 } 00:21:53.338 } 00:21:53.338 ]' 00:21:53.338 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:21:53.338 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.338 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.338 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.338 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.338 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.338 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.338 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.597 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:53.598 19:16:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:21:54.167 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.167 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:54.167 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.167 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.167 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.167 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.167 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.167 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.427 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.688 00:21:54.688 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.688 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.688 19:16:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.947 { 00:21:54.947 "cntlid": 119, 00:21:54.947 "qid": 0, 00:21:54.947 "state": "enabled", 00:21:54.947 "thread": "nvmf_tgt_poll_group_000", 00:21:54.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:54.947 "listen_address": { 00:21:54.947 "trtype": "RDMA", 00:21:54.947 "adrfam": "IPv4", 00:21:54.947 "traddr": "192.168.100.8", 00:21:54.947 "trsvcid": "4420" 00:21:54.947 }, 00:21:54.947 "peer_address": { 00:21:54.947 "trtype": "RDMA", 00:21:54.947 "adrfam": "IPv4", 00:21:54.947 "traddr": "192.168.100.8", 00:21:54.947 "trsvcid": "34441" 00:21:54.947 }, 00:21:54.947 "auth": { 00:21:54.947 "state": "completed", 00:21:54.947 "digest": "sha512", 00:21:54.947 "dhgroup": "ffdhe3072" 
00:21:54.947 } 00:21:54.947 } 00:21:54.947 ]' 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.947 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.207 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.208 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.208 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.208 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:55.208 19:16:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:21:55.867 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.227 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:56.227 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.227 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.227 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.227 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.227 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.227 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
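
Here the outer loop advances from ffdhe2048/ffdhe3072 to ffdhe4096. The dhgroups being swept are the RFC 7919 finite-field Diffie-Hellman groups, named for their modulus size in bits, and bdev_nvme_set_options re-pins the host to exactly one digest and one group per iteration, so the values later read back from the qpair can only be the intended pair. The driving loops, reconstructed from the auth.sh@119-123 references in the trace (the full dhgroup list is an assumption; this run shows ffdhe2048 through ffdhe6144):

    for dhgroup in "${dhgroups[@]}"; do        # e.g. ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do         # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
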
-- target/auth.sh@67 -- # digest=sha512 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.228 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.505 00:21:56.505 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.505 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.505 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.830 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.830 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.830 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.830 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.830 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.830 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.830 { 00:21:56.830 "cntlid": 121, 00:21:56.830 "qid": 0, 00:21:56.830 "state": "enabled", 00:21:56.830 "thread": "nvmf_tgt_poll_group_000", 00:21:56.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:56.831 "listen_address": { 00:21:56.831 "trtype": "RDMA", 00:21:56.831 "adrfam": "IPv4", 00:21:56.831 "traddr": "192.168.100.8", 00:21:56.831 "trsvcid": "4420" 00:21:56.831 }, 00:21:56.831 "peer_address": { 00:21:56.831 "trtype": "RDMA", 
00:21:56.831 "adrfam": "IPv4", 00:21:56.831 "traddr": "192.168.100.8", 00:21:56.831 "trsvcid": "54760" 00:21:56.831 }, 00:21:56.831 "auth": { 00:21:56.831 "state": "completed", 00:21:56.831 "digest": "sha512", 00:21:56.831 "dhgroup": "ffdhe4096" 00:21:56.831 } 00:21:56.831 } 00:21:56.831 ]' 00:21:56.831 19:16:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.831 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.831 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.831 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.831 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.831 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.831 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.831 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.150 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:57.150 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:21:57.719 19:16:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.719 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:57.719 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.719 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.719 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.719 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.719 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:57.719 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
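
The nvme-cli invocation mirrors the SPDK attach parameters flag for flag; annotated below with the usual nvme-cli meanings of the short options (an assumption worth checking against the installed nvme-cli version):

    # -t transport, -a target address, -n subsystem NQN, -i nr-io-queues,
    # -q hostnqn, -l ctrl-loss-tmo (0 = fail immediately instead of retrying)
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    # no -s/--trsvcid is given; nvme-cli defaults to port 4420 for rdma
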
00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.978 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.237 00:21:58.237 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.237 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.237 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.496 { 00:21:58.496 "cntlid": 123, 00:21:58.496 "qid": 0, 00:21:58.496 "state": "enabled", 00:21:58.496 "thread": "nvmf_tgt_poll_group_000", 
00:21:58.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:58.496 "listen_address": { 00:21:58.496 "trtype": "RDMA", 00:21:58.496 "adrfam": "IPv4", 00:21:58.496 "traddr": "192.168.100.8", 00:21:58.496 "trsvcid": "4420" 00:21:58.496 }, 00:21:58.496 "peer_address": { 00:21:58.496 "trtype": "RDMA", 00:21:58.496 "adrfam": "IPv4", 00:21:58.496 "traddr": "192.168.100.8", 00:21:58.496 "trsvcid": "34325" 00:21:58.496 }, 00:21:58.496 "auth": { 00:21:58.496 "state": "completed", 00:21:58.496 "digest": "sha512", 00:21:58.496 "dhgroup": "ffdhe4096" 00:21:58.496 } 00:21:58.496 } 00:21:58.496 ]' 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.496 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.755 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.755 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.755 19:16:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.755 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:58.755 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:21:59.692 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.692 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:59.692 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.692 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.692 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.692 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.692 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:21:59.692 19:16:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.692 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.693 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.693 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.951 00:22:00.210 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
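
In the qpair dumps, listen_address always shows trsvcid 4420 (the IANA-assigned NVMe-oF port the target listens on), while peer_address carries the initiator's ephemeral RDMA CM source port, which is why that field changes on every pass (60538, 49040, 47982, ...). The controller IDs likewise advance by two per pass, presumably because each pass creates two controllers on the subsystem: the SPDK bdev attach and the kernel nvme connect. Both fields can be pulled from a dump in one go:

    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0] | "\(.cntlid) \(.peer_address.trsvcid)"'
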
00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.211 { 00:22:00.211 "cntlid": 125, 00:22:00.211 "qid": 0, 00:22:00.211 "state": "enabled", 00:22:00.211 "thread": "nvmf_tgt_poll_group_000", 00:22:00.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:00.211 "listen_address": { 00:22:00.211 "trtype": "RDMA", 00:22:00.211 "adrfam": "IPv4", 00:22:00.211 "traddr": "192.168.100.8", 00:22:00.211 "trsvcid": "4420" 00:22:00.211 }, 00:22:00.211 "peer_address": { 00:22:00.211 "trtype": "RDMA", 00:22:00.211 "adrfam": "IPv4", 00:22:00.211 "traddr": "192.168.100.8", 00:22:00.211 "trsvcid": "38885" 00:22:00.211 }, 00:22:00.211 "auth": { 00:22:00.211 "state": "completed", 00:22:00.211 "digest": "sha512", 00:22:00.211 "dhgroup": "ffdhe4096" 00:22:00.211 } 00:22:00.211 } 00:22:00.211 ]' 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.211 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.470 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.470 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.470 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.470 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.470 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.729 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:22:00.729 19:16:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:22:01.298 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.298 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:01.298 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.298 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.298 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.298 19:16:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.298 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.298 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.557 19:16:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.816 00:22:01.816 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.816 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.816 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.075 19:16:36 
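
The host identity threaded through every call is a UUID-based NQN of the form defined by the NVMe base specification, with the same machine UUID doubling as the --hostid:

    uuid=8013ee90-59d8-e711-906e-00163566263e
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
    # both identifiers travel together on every kernel connect in this trace:
    # nvme connect ... -q "$hostnqn" --hostid "$uuid" ...
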
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.075 { 00:22:02.075 "cntlid": 127, 00:22:02.075 "qid": 0, 00:22:02.075 "state": "enabled", 00:22:02.075 "thread": "nvmf_tgt_poll_group_000", 00:22:02.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:02.075 "listen_address": { 00:22:02.075 "trtype": "RDMA", 00:22:02.075 "adrfam": "IPv4", 00:22:02.075 "traddr": "192.168.100.8", 00:22:02.075 "trsvcid": "4420" 00:22:02.075 }, 00:22:02.075 "peer_address": { 00:22:02.075 "trtype": "RDMA", 00:22:02.075 "adrfam": "IPv4", 00:22:02.075 "traddr": "192.168.100.8", 00:22:02.075 "trsvcid": "59238" 00:22:02.075 }, 00:22:02.075 "auth": { 00:22:02.075 "state": "completed", 00:22:02.075 "digest": "sha512", 00:22:02.075 "dhgroup": "ffdhe4096" 00:22:02.075 } 00:22:02.075 } 00:22:02.075 ]' 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.075 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.334 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:02.334 19:16:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:02.903 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.162 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:03.162 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.162 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.162 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.162 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.162 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.162 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.162 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.421 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:03.421 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.421 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.421 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:03.421 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:03.421 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.421 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.422 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.422 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.422 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.422 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.422 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.422 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.681 00:22:03.681 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.681 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.681 19:16:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.940 19:16:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.940 { 00:22:03.940 "cntlid": 129, 00:22:03.940 "qid": 0, 00:22:03.940 "state": "enabled", 00:22:03.940 "thread": "nvmf_tgt_poll_group_000", 00:22:03.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:03.940 "listen_address": { 00:22:03.940 "trtype": "RDMA", 00:22:03.940 "adrfam": "IPv4", 00:22:03.940 "traddr": "192.168.100.8", 00:22:03.940 "trsvcid": "4420" 00:22:03.940 }, 00:22:03.940 "peer_address": { 00:22:03.940 "trtype": "RDMA", 00:22:03.940 "adrfam": "IPv4", 00:22:03.940 "traddr": "192.168.100.8", 00:22:03.940 "trsvcid": "44685" 00:22:03.940 }, 00:22:03.940 "auth": { 00:22:03.940 "state": "completed", 00:22:03.940 "digest": "sha512", 00:22:03.940 "dhgroup": "ffdhe6144" 00:22:03.940 } 00:22:03.940 } 00:22:03.940 ]' 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.940 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.199 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:22:04.199 19:16:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:22:04.767 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.027 19:16:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.027 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.595 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.595 { 00:22:05.595 "cntlid": 131, 00:22:05.595 "qid": 0, 00:22:05.595 "state": "enabled", 00:22:05.595 "thread": "nvmf_tgt_poll_group_000", 00:22:05.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:05.595 "listen_address": { 00:22:05.595 "trtype": "RDMA", 00:22:05.595 "adrfam": "IPv4", 00:22:05.595 "traddr": "192.168.100.8", 00:22:05.595 "trsvcid": "4420" 00:22:05.595 }, 00:22:05.595 "peer_address": { 00:22:05.595 "trtype": "RDMA", 00:22:05.595 "adrfam": "IPv4", 00:22:05.595 "traddr": "192.168.100.8", 00:22:05.595 "trsvcid": "51727" 00:22:05.595 }, 00:22:05.595 "auth": { 00:22:05.595 "state": "completed", 00:22:05.595 "digest": "sha512", 00:22:05.595 "dhgroup": "ffdhe6144" 00:22:05.595 } 00:22:05.595 } 00:22:05.595 ]' 00:22:05.595 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.855 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.855 19:16:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.855 19:16:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.855 19:16:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.855 19:16:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.855 19:16:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.855 19:16:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.114 19:16:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:22:06.114 19:16:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret 
DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:22:06.682 19:16:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.682 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:06.682 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.682 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.682 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.682 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.682 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.682 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.941 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.200 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.460 { 00:22:07.460 "cntlid": 133, 00:22:07.460 "qid": 0, 00:22:07.460 "state": "enabled", 00:22:07.460 "thread": "nvmf_tgt_poll_group_000", 00:22:07.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:07.460 "listen_address": { 00:22:07.460 "trtype": "RDMA", 00:22:07.460 "adrfam": "IPv4", 00:22:07.460 "traddr": "192.168.100.8", 00:22:07.460 "trsvcid": "4420" 00:22:07.460 }, 00:22:07.460 "peer_address": { 00:22:07.460 "trtype": "RDMA", 00:22:07.460 "adrfam": "IPv4", 00:22:07.460 "traddr": "192.168.100.8", 00:22:07.460 "trsvcid": "52798" 00:22:07.460 }, 00:22:07.460 "auth": { 00:22:07.460 "state": "completed", 00:22:07.460 "digest": "sha512", 00:22:07.460 "dhgroup": "ffdhe6144" 00:22:07.460 } 00:22:07.460 } 00:22:07.460 ]' 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.460 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.719 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.719 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.719 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.719 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.719 19:16:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.978 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:22:07.978 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:22:08.545 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.545 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:08.545 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.545 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.545 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.545 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.545 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:08.545 19:16:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.804 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.063 00:22:09.063 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.063 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.063 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.325 { 00:22:09.325 "cntlid": 135, 00:22:09.325 "qid": 0, 00:22:09.325 "state": "enabled", 00:22:09.325 "thread": "nvmf_tgt_poll_group_000", 00:22:09.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:09.325 "listen_address": { 00:22:09.325 "trtype": "RDMA", 00:22:09.325 "adrfam": "IPv4", 00:22:09.325 "traddr": "192.168.100.8", 00:22:09.325 "trsvcid": "4420" 00:22:09.325 }, 00:22:09.325 "peer_address": { 00:22:09.325 "trtype": "RDMA", 00:22:09.325 "adrfam": "IPv4", 00:22:09.325 "traddr": "192.168.100.8", 00:22:09.325 "trsvcid": "37238" 00:22:09.325 }, 00:22:09.325 "auth": { 00:22:09.325 "state": "completed", 00:22:09.325 "digest": "sha512", 00:22:09.325 "dhgroup": "ffdhe6144" 00:22:09.325 } 00:22:09.325 } 00:22:09.325 ]' 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.325 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:09.583 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.583 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.584 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.584 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.584 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 
00:22:09.584 19:16:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.522 19:16:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.089 00:22:11.089 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.089 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.089 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.348 { 00:22:11.348 "cntlid": 137, 00:22:11.348 "qid": 0, 00:22:11.348 "state": "enabled", 00:22:11.348 "thread": "nvmf_tgt_poll_group_000", 00:22:11.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:11.348 "listen_address": { 00:22:11.348 "trtype": "RDMA", 00:22:11.348 "adrfam": "IPv4", 00:22:11.348 "traddr": "192.168.100.8", 00:22:11.348 "trsvcid": "4420" 00:22:11.348 }, 00:22:11.348 "peer_address": { 00:22:11.348 "trtype": "RDMA", 00:22:11.348 "adrfam": "IPv4", 00:22:11.348 "traddr": "192.168.100.8", 00:22:11.348 "trsvcid": "35882" 00:22:11.348 }, 00:22:11.348 "auth": { 00:22:11.348 "state": "completed", 00:22:11.348 "digest": "sha512", 00:22:11.348 "dhgroup": "ffdhe8192" 00:22:11.348 } 00:22:11.348 } 00:22:11.348 ]' 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.348 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.607 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:22:11.607 19:16:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:22:12.175 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.434 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:12.434 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.434 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.434 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.434 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.434 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.434 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.694 19:16:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.263 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.263 { 00:22:13.263 "cntlid": 139, 00:22:13.263 "qid": 0, 00:22:13.263 "state": "enabled", 00:22:13.263 "thread": "nvmf_tgt_poll_group_000", 00:22:13.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:13.263 "listen_address": { 00:22:13.263 "trtype": "RDMA", 00:22:13.263 "adrfam": "IPv4", 00:22:13.263 "traddr": "192.168.100.8", 00:22:13.263 "trsvcid": "4420" 00:22:13.263 }, 00:22:13.263 "peer_address": { 00:22:13.263 "trtype": "RDMA", 00:22:13.263 "adrfam": "IPv4", 00:22:13.263 "traddr": "192.168.100.8", 00:22:13.263 "trsvcid": "52255" 00:22:13.263 }, 00:22:13.263 "auth": { 00:22:13.263 "state": "completed", 00:22:13.263 "digest": "sha512", 00:22:13.263 "dhgroup": "ffdhe8192" 00:22:13.263 } 00:22:13.263 } 00:22:13.263 ]' 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.263 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.522 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.522 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.522 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.522 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.522 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.781 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:22:13.781 19:16:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: --dhchap-ctrl-secret DHHC-1:02:MzkwYTUyMDllZDU2YmVjNThiYWNhYzc4MmUzMzI0NjU0ZTAyODQ4OTY4NTdmODI38BouUQ==: 00:22:14.350 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.350 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:14.350 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.350 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.350 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.350 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.350 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.350 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.609 19:16:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.178 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.179 { 00:22:15.179 "cntlid": 141, 00:22:15.179 "qid": 0, 00:22:15.179 "state": "enabled", 00:22:15.179 "thread": "nvmf_tgt_poll_group_000", 00:22:15.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:15.179 "listen_address": { 00:22:15.179 "trtype": "RDMA", 00:22:15.179 "adrfam": "IPv4", 00:22:15.179 "traddr": "192.168.100.8", 00:22:15.179 "trsvcid": "4420" 00:22:15.179 }, 00:22:15.179 "peer_address": { 00:22:15.179 "trtype": "RDMA", 00:22:15.179 "adrfam": "IPv4", 00:22:15.179 "traddr": "192.168.100.8", 00:22:15.179 "trsvcid": "60241" 00:22:15.179 }, 00:22:15.179 "auth": { 00:22:15.179 "state": "completed", 00:22:15.179 "digest": "sha512", 00:22:15.179 "dhgroup": "ffdhe8192" 00:22:15.179 } 00:22:15.179 } 00:22:15.179 ]' 00:22:15.179 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.438 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.438 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.438 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.438 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.438 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.438 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.438 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.697 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:22:15.698 19:16:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MDU2OGQ4ZjNlY2RhYzAyYWEzMWFjOTRlZmVkYzmO4Tqg: 00:22:16.265 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.265 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:16.265 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.265 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.265 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.265 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.265 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.265 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.525 19:16:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.093 00:22:17.093 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.093 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.093 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.093 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.093 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.093 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.093 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.093 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.352 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.352 { 00:22:17.352 "cntlid": 143, 00:22:17.352 "qid": 0, 00:22:17.352 "state": "enabled", 00:22:17.352 "thread": "nvmf_tgt_poll_group_000", 00:22:17.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:17.352 "listen_address": { 00:22:17.352 "trtype": "RDMA", 00:22:17.352 "adrfam": "IPv4", 00:22:17.352 "traddr": "192.168.100.8", 00:22:17.352 "trsvcid": "4420" 00:22:17.352 }, 00:22:17.352 "peer_address": { 00:22:17.352 "trtype": "RDMA", 00:22:17.352 "adrfam": "IPv4", 00:22:17.352 "traddr": "192.168.100.8", 00:22:17.352 "trsvcid": "54071" 00:22:17.352 }, 00:22:17.352 "auth": { 00:22:17.352 "state": "completed", 00:22:17.352 "digest": "sha512", 00:22:17.352 "dhgroup": "ffdhe8192" 00:22:17.352 } 00:22:17.352 } 00:22:17.352 ]' 00:22:17.352 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.352 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.352 19:16:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.352 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.352 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.352 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.352 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.352 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.610 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:17.611 19:16:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:18.178 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.179 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.438 19:16:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.438 19:16:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.007 00:22:19.007 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.007 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.007 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.266 { 00:22:19.266 "cntlid": 145, 00:22:19.266 "qid": 0, 00:22:19.266 "state": "enabled", 00:22:19.266 "thread": "nvmf_tgt_poll_group_000", 00:22:19.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:19.266 "listen_address": { 00:22:19.266 "trtype": "RDMA", 00:22:19.266 "adrfam": "IPv4", 00:22:19.266 "traddr": "192.168.100.8", 00:22:19.266 "trsvcid": "4420" 00:22:19.266 }, 00:22:19.266 
"peer_address": { 00:22:19.266 "trtype": "RDMA", 00:22:19.266 "adrfam": "IPv4", 00:22:19.266 "traddr": "192.168.100.8", 00:22:19.266 "trsvcid": "36750" 00:22:19.266 }, 00:22:19.266 "auth": { 00:22:19.266 "state": "completed", 00:22:19.266 "digest": "sha512", 00:22:19.266 "dhgroup": "ffdhe8192" 00:22:19.266 } 00:22:19.266 } 00:22:19.266 ]' 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.266 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.526 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:22:19.526 19:16:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:YjE1ZmQyMWJjZjJhYzMwYTVlMWI0YjMyNTUxZjNjYWU0ZWE2NGM0OWU0Yjk2OTI4HAIuqw==: --dhchap-ctrl-secret DHHC-1:03:MmNkMzMzZWE0NDI4NTUzNzA1ZjljNTU0N2RkZTZkMmQwMDY0NjAzODgzMWZkODY0MWRlZDdmODA0M2U5MmFhNh0KgMY=: 00:22:20.094 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.353 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:20.353 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.353 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.353 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.353 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:22:20.353 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.354 19:16:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:20.354 19:16:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:20.927 request: 00:22:20.927 { 00:22:20.927 "name": "nvme0", 00:22:20.927 "trtype": "rdma", 00:22:20.927 "traddr": "192.168.100.8", 00:22:20.927 "adrfam": "ipv4", 00:22:20.927 "trsvcid": "4420", 00:22:20.927 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:20.927 "prchk_reftag": false, 00:22:20.927 "prchk_guard": false, 00:22:20.927 "hdgst": false, 00:22:20.927 "ddgst": false, 00:22:20.927 "dhchap_key": "key2", 00:22:20.927 "allow_unrecognized_csi": false, 00:22:20.927 "method": "bdev_nvme_attach_controller", 00:22:20.927 "req_id": 1 00:22:20.927 } 00:22:20.927 Got JSON-RPC error response 00:22:20.927 response: 00:22:20.927 { 00:22:20.927 "code": -5, 00:22:20.927 "message": "Input/output error" 00:22:20.927 } 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.927 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.185 request: 00:22:21.185 { 00:22:21.185 "name": "nvme0", 00:22:21.185 "trtype": "rdma", 00:22:21.185 "traddr": "192.168.100.8", 00:22:21.185 "adrfam": "ipv4", 00:22:21.185 "trsvcid": "4420", 00:22:21.185 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:21.185 "prchk_reftag": false, 00:22:21.185 "prchk_guard": false, 00:22:21.185 "hdgst": false, 00:22:21.185 "ddgst": false, 00:22:21.185 "dhchap_key": "key1", 00:22:21.185 "dhchap_ctrlr_key": "ckey2", 00:22:21.185 "allow_unrecognized_csi": false, 00:22:21.185 "method": "bdev_nvme_attach_controller", 00:22:21.185 "req_id": 1 00:22:21.185 } 00:22:21.185 Got JSON-RPC error response 00:22:21.185 response: 00:22:21.185 { 00:22:21.185 "code": -5, 00:22:21.185 "message": "Input/output error" 00:22:21.185 } 00:22:21.185 19:16:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.185 19:16:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.754 request: 00:22:21.754 { 00:22:21.754 "name": "nvme0", 
00:22:21.754 "trtype": "rdma", 00:22:21.754 "traddr": "192.168.100.8", 00:22:21.754 "adrfam": "ipv4", 00:22:21.754 "trsvcid": "4420", 00:22:21.754 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:21.754 "prchk_reftag": false, 00:22:21.754 "prchk_guard": false, 00:22:21.754 "hdgst": false, 00:22:21.754 "ddgst": false, 00:22:21.754 "dhchap_key": "key1", 00:22:21.754 "dhchap_ctrlr_key": "ckey1", 00:22:21.754 "allow_unrecognized_csi": false, 00:22:21.754 "method": "bdev_nvme_attach_controller", 00:22:21.754 "req_id": 1 00:22:21.754 } 00:22:21.754 Got JSON-RPC error response 00:22:21.754 response: 00:22:21.754 { 00:22:21.754 "code": -5, 00:22:21.754 "message": "Input/output error" 00:22:21.754 } 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 321044 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 321044 ']' 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 321044 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321044 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321044' 00:22:21.754 killing process with pid 321044 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 321044 00:22:21.754 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 321044 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.014 19:16:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=346170 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 346170 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 346170 ']' 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.014 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 346170 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 346170 ']' 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
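Note: the records above show the first nvmf_tgt instance (pid 321044) being killed and a fresh target launched with --wait-for-rpc plus the nvmf_auth debug log flag, then waited on. A minimal stand-alone sketch of that restart sequence, assuming the paths from this run and using rpc_get_methods as the readiness probe (the suite's own waitforlisten helper does the equivalent; the probe shown here is an assumption, not the suite's exact code):

    #!/usr/bin/env bash
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # start the target with subsystem init deferred until RPCs arrive,
    # and with DH-HMAC-CHAP (nvmf_auth) debug logging enabled
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # poll the default RPC socket until the target answers (assumed probe)
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do sleep 0.5; done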
00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.273 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.532 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.532 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:22.532 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:22.532 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.532 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.532 null0 00:22:22.792 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.792 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:22.792 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ryy 00:22:22.792 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.792 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.792 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.792 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Ly6 ]] 00:22:22.792 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ly6 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.w61 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.AHm ]] 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AHm 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
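Note: auth.sh@174-@176 above loop over the generated key files and register each one with the target's keyring; a controller key (ckeyN) is added only when a companion file was generated. A minimal sketch of one loop iteration, using the temporary file names visible in this run (the rpc_py shorthand is illustrative):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # host-side DH-HMAC-CHAP key for index 0, generated earlier by the suite
    $rpc_py keyring_file_add_key key0 /tmp/spdk.key-null.Ryy
    # matching controller key, registered only if it was generated
    ckey0=/tmp/spdk.key-sha512.Ly6
    [ -n "$ckey0" ] && $rpc_py keyring_file_add_key ckey0 "$ckey0"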
00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.eBA 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.CrG ]] 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CrG 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.793 19:16:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.BV1 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.793 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.361 nvme0n1 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.620 { 00:22:23.620 "cntlid": 1, 00:22:23.620 "qid": 0, 00:22:23.620 "state": "enabled", 00:22:23.620 "thread": "nvmf_tgt_poll_group_000", 00:22:23.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:23.620 "listen_address": { 00:22:23.620 "trtype": "RDMA", 00:22:23.620 "adrfam": "IPv4", 00:22:23.620 "traddr": "192.168.100.8", 00:22:23.620 "trsvcid": "4420" 00:22:23.620 }, 00:22:23.620 "peer_address": { 00:22:23.620 "trtype": "RDMA", 00:22:23.620 "adrfam": "IPv4", 00:22:23.620 "traddr": "192.168.100.8", 00:22:23.620 "trsvcid": "58002" 00:22:23.620 }, 00:22:23.620 "auth": { 00:22:23.620 "state": "completed", 00:22:23.620 "digest": "sha512", 00:22:23.620 "dhgroup": "ffdhe8192" 00:22:23.620 } 00:22:23.620 } 00:22:23.620 ]' 00:22:23.620 19:16:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.879 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.879 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.879 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.879 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.879 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.879 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.879 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.139 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:24.139 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:24.708 19:16:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:24.708 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.968 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.227 request: 00:22:25.227 { 00:22:25.227 "name": "nvme0", 00:22:25.227 "trtype": "rdma", 00:22:25.227 "traddr": "192.168.100.8", 00:22:25.227 "adrfam": "ipv4", 00:22:25.227 "trsvcid": "4420", 00:22:25.227 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:25.227 "prchk_reftag": false, 00:22:25.227 "prchk_guard": false, 00:22:25.227 "hdgst": false, 00:22:25.227 "ddgst": false, 00:22:25.227 "dhchap_key": "key3", 00:22:25.227 "allow_unrecognized_csi": false, 00:22:25.227 "method": "bdev_nvme_attach_controller", 00:22:25.227 "req_id": 1 00:22:25.227 } 00:22:25.227 Got JSON-RPC error response 00:22:25.227 response: 00:22:25.227 { 00:22:25.227 "code": -5, 00:22:25.227 "message": "Input/output error" 00:22:25.227 } 00:22:25.227 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:25.227 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:25.227 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:25.227 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:25.227 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:25.227 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:25.227 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:25.227 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:25.486 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
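Note: the attach attempt that follows is the DH-group counterpart of the digest-mismatch check above: auth.sh@187 restricted the host to ffdhe2048, while key3 was exercised with ffdhe8192 earlier in this run, so the DH-HMAC-CHAP transaction presumably cannot complete and bdev_nvme_attach_controller is expected to fail with -5 (Input/output error), just as the sha256-only digest restriction did. A hedged sketch of the pattern, with $hostnqn standing in for the host NQN used throughout this run:

    rpc_py="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    # offer only ffdhe2048 on the host side; the target side was set up
    # with ffdhe8192 earlier in this run, so negotiation should not succeed
    $rpc_py bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 \
        --dhchap-digests sha256,sha384,sha512
    # this attach is expected to fail; treat success as the test error
    if $rpc_py bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 \
        -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key3; then
        echo "unexpected: attach succeeded despite dhgroup mismatch" >&2
    fi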
00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.487 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.745 request: 00:22:25.745 { 00:22:25.745 "name": "nvme0", 00:22:25.745 "trtype": "rdma", 00:22:25.745 "traddr": "192.168.100.8", 00:22:25.745 "adrfam": "ipv4", 00:22:25.745 "trsvcid": "4420", 00:22:25.745 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:25.745 "prchk_reftag": false, 00:22:25.745 "prchk_guard": false, 00:22:25.745 "hdgst": false, 00:22:25.745 "ddgst": false, 00:22:25.745 "dhchap_key": "key3", 00:22:25.745 "allow_unrecognized_csi": false, 00:22:25.745 "method": "bdev_nvme_attach_controller", 00:22:25.745 "req_id": 1 00:22:25.745 } 00:22:25.745 Got JSON-RPC error response 00:22:25.745 response: 00:22:25.745 { 00:22:25.745 "code": -5, 00:22:25.745 "message": "Input/output error" 00:22:25.745 } 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.745 19:16:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:25.745 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:25.745 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.745 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.746 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:25.746 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:25.746 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.746 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.005 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.264 request: 00:22:26.264 { 00:22:26.264 "name": "nvme0", 00:22:26.264 "trtype": "rdma", 00:22:26.264 "traddr": "192.168.100.8", 00:22:26.264 "adrfam": "ipv4", 00:22:26.264 "trsvcid": "4420", 00:22:26.264 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:26.264 "prchk_reftag": false, 00:22:26.264 "prchk_guard": false, 00:22:26.264 "hdgst": false, 00:22:26.264 "ddgst": false, 00:22:26.264 "dhchap_key": "key0", 00:22:26.264 "dhchap_ctrlr_key": "key1", 00:22:26.264 "allow_unrecognized_csi": false, 00:22:26.264 "method": "bdev_nvme_attach_controller", 00:22:26.264 "req_id": 1 00:22:26.264 } 00:22:26.264 Got JSON-RPC error response 00:22:26.264 response: 00:22:26.264 { 00:22:26.264 "code": -5, 00:22:26.264 "message": "Input/output error" 00:22:26.264 } 00:22:26.264 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:26.264 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.264 
19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.264 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.264 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:26.264 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:26.264 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:26.526 nvme0n1 00:22:26.526 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:26.526 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:26.526 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.785 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.785 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.785 19:17:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.044 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:22:27.044 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.045 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.045 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.045 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:27.045 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:27.045 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:27.613 nvme0n1 00:22:27.613 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:27.613 19:17:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:27.613 19:17:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.872 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.872 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.872 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.872 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.872 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.872 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:27.872 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:27.872 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.132 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.132 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:28.132 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: --dhchap-ctrl-secret DHHC-1:03:ODZhMTVmZjVjMWQzY2NiYjBiZjY5ZjI1NzdkOTQwZWNkNDFmYjZkNzQ2MTVhZWM4NGQzYzZmM2NmZDk1YzcwZgg19wg=: 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.700 19:17:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:28.959 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:29.527 request: 00:22:29.527 { 00:22:29.527 "name": "nvme0", 00:22:29.527 "trtype": "rdma", 00:22:29.527 "traddr": "192.168.100.8", 00:22:29.527 "adrfam": "ipv4", 00:22:29.527 "trsvcid": "4420", 00:22:29.527 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:29.527 "prchk_reftag": false, 00:22:29.527 "prchk_guard": false, 00:22:29.527 "hdgst": false, 00:22:29.527 "ddgst": false, 00:22:29.527 "dhchap_key": "key1", 00:22:29.527 "allow_unrecognized_csi": false, 00:22:29.527 "method": "bdev_nvme_attach_controller", 00:22:29.527 "req_id": 1 00:22:29.527 } 00:22:29.527 Got JSON-RPC error response 00:22:29.527 response: 00:22:29.527 { 00:22:29.527 "code": -5, 00:22:29.527 "message": "Input/output error" 00:22:29.527 } 00:22:29.527 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:29.527 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.527 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.527 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.527 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.527 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.527 19:17:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.095 nvme0n1 00:22:30.095 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:30.095 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:30.095 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.354 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.354 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.354 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.614 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:30.614 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.614 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.614 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.614 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:30.614 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:30.614 19:17:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:30.872 nvme0n1 00:22:30.872 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:30.872 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:30.872 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.872 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.872 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.872 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: '' 2s 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: ]] 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTJmYjdjZmUyYWUwZmM3YzVkZDRmY2JlMGExOTNiOTaZm8bU: 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:31.130 19:17:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.666 19:17:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: 2s 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:33.666 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:33.667 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:33.667 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: ]] 00:22:33.667 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGZjZjlmNTBlYjI5N2VjYzU3MzkxOGM1OTQ4OTIwMjg0ZGFmYWNmYjlmYzZiOTYwrOU/KQ==: 00:22:33.667 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:33.667 19:17:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.572 19:17:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:35.572 19:17:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:36.140 nvme0n1 00:22:36.140 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:36.140 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.140 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.140 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.140 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:36.140 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:36.708 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:36.708 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:36.708 19:17:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:36.967 19:17:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:36.967 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:37.226 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:37.795 request: 00:22:37.795 { 00:22:37.795 "name": "nvme0", 00:22:37.795 "dhchap_key": "key1", 00:22:37.795 "dhchap_ctrlr_key": "key3", 00:22:37.795 "method": "bdev_nvme_set_keys", 00:22:37.795 "req_id": 1 00:22:37.795 } 00:22:37.795 Got JSON-RPC error response 00:22:37.795 response: 00:22:37.795 { 00:22:37.795 "code": -13, 00:22:37.795 "message": "Permission denied" 00:22:37.795 } 00:22:37.795 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:37.795 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.795 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.795 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.795 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:22:37.795 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:37.795 19:17:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.795 19:17:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:37.795 19:17:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:39.175 19:17:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:39.743 nvme0n1 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:39.743 
19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:39.743 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:40.312 request: 00:22:40.312 { 00:22:40.312 "name": "nvme0", 00:22:40.312 "dhchap_key": "key2", 00:22:40.312 "dhchap_ctrlr_key": "key0", 00:22:40.312 "method": "bdev_nvme_set_keys", 00:22:40.312 "req_id": 1 00:22:40.312 } 00:22:40.312 Got JSON-RPC error response 00:22:40.312 response: 00:22:40.312 { 00:22:40.312 "code": -13, 00:22:40.312 "message": "Permission denied" 00:22:40.312 } 00:22:40.312 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:40.312 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.312 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.312 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.312 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:40.312 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:40.312 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.571 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:40.571 19:17:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:41.507 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:41.507 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:41.507 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.767 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:41.767 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:41.767 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:41.767 19:17:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 321176 00:22:41.767 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 321176 ']' 00:22:41.767 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 321176 00:22:41.767 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:41.767 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.767 19:17:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321176 00:22:41.767 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:41.767 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:41.767 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321176' 00:22:41.767 killing process with pid 321176 00:22:41.767 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 321176 00:22:41.767 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 321176 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:42.026 rmmod nvme_rdma 00:22:42.026 rmmod nvme_fabrics 00:22:42.026 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 346170 ']' 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 346170 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 346170 ']' 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 346170 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.027 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346170 00:22:42.286 19:17:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346170' 00:22:42.286 killing process with pid 346170 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 346170 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 346170 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Ryy /tmp/spdk.key-sha256.w61 /tmp/spdk.key-sha384.eBA /tmp/spdk.key-sha512.BV1 /tmp/spdk.key-sha512.Ly6 /tmp/spdk.key-sha384.AHm /tmp/spdk.key-sha256.CrG '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:22:42.286 00:22:42.286 real 2m46.719s 00:22:42.286 user 6m21.210s 00:22:42.286 sys 0m25.009s 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.286 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.286 ************************************ 00:22:42.286 END TEST nvmf_auth_target 00:22:42.286 ************************************ 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:42.584 ************************************ 00:22:42.584 START TEST nvmf_fuzz 00:22:42.584 ************************************ 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:22:42.584 * Looking for test storage... 
00:22:42.584 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:42.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.584 --rc genhtml_branch_coverage=1 00:22:42.584 --rc genhtml_function_coverage=1 00:22:42.584 --rc genhtml_legend=1 00:22:42.584 --rc geninfo_all_blocks=1 00:22:42.584 --rc geninfo_unexecuted_blocks=1 00:22:42.584 00:22:42.584 ' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:42.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.584 --rc genhtml_branch_coverage=1 00:22:42.584 --rc genhtml_function_coverage=1 00:22:42.584 --rc genhtml_legend=1 00:22:42.584 --rc geninfo_all_blocks=1 00:22:42.584 --rc geninfo_unexecuted_blocks=1 00:22:42.584 00:22:42.584 ' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:42.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.584 --rc genhtml_branch_coverage=1 00:22:42.584 --rc genhtml_function_coverage=1 00:22:42.584 --rc genhtml_legend=1 00:22:42.584 --rc geninfo_all_blocks=1 00:22:42.584 --rc geninfo_unexecuted_blocks=1 00:22:42.584 00:22:42.584 ' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:42.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.584 --rc genhtml_branch_coverage=1 00:22:42.584 --rc genhtml_function_coverage=1 00:22:42.584 --rc genhtml_legend=1 00:22:42.584 --rc geninfo_all_blocks=1 00:22:42.584 --rc geninfo_unexecuted_blocks=1 00:22:42.584 00:22:42.584 ' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.584 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.585 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.585 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.844 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.844 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.844 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.844 19:17:16 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.961 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:50.962 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:50.962 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:50.962 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:50.962 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:50.962 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:50.962 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:50.962 altname enp217s0f0np0 00:22:50.962 altname ens818f0np0 00:22:50.962 inet 192.168.100.8/24 scope global mlx_0_0 
00:22:50.962 valid_lft forever preferred_lft forever 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:50.962 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:50.962 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:50.962 altname enp217s0f1np1 00:22:50.962 altname ens818f1np1 00:22:50.962 inet 192.168.100.9/24 scope global mlx_0_1 00:22:50.962 valid_lft forever preferred_lft forever 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:50.962 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:50.963 19:17:23 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:50.963 19:17:23 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:50.963 192.168.100.9' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:50.963 192.168.100.9' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:50.963 192.168.100.9' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=353186 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 
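[Annotation] The `Found net devices under ...` lines in this section come from a sysfs glob: every PCI network function lists its kernel net devices under /sys/bus/pci/devices/<addr>/net/. A minimal standalone equivalent of that lookup, using the address captured in this run:

```bash
# Sketch of the lookup at nvmf/common.sh@411/@427: glob the sysfs net/
# directory for a PCI function, then strip each path down to the device name.
pci=0000:d9:00.0                                  # address from this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # full sysfs paths
pci_net_devs=("${pci_net_devs[@]##*/}")           # basenames, e.g. mlx_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```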
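[Annotation] rdma_device_init then pulls in the whole IB/RDMA core stack with modprobe, exactly the module set visible in the trace (ib_cm through rdma_ucm). Condensed into a self-contained sketch (requires root):

```bash
# The module set loaded by load_ib_rdma_modules (nvmf/common.sh@66-@72).
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || echo "warning: failed to load $mod" >&2
done
```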
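[Annotation] allocate_nic_ips reads each interface's IPv4 address with the ip/awk/cut pipeline traced above, and the harness later splits the resulting newline-separated list into first and second target IPs with head/tail. Both steps condensed into a runnable sketch, with the addresses this run produced noted in comments:

```bash
# get_ip_address equivalent (nvmf/common.sh@116-@117): first IPv4 address
# of an interface, without the /prefix length.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

# Target IP selection (nvmf/common.sh@485-@486) over the collected list.
RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
```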
00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 353186 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 353186 ']' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.963 Malloc0 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 
192.168.100.8 -s 4420 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:22:50.963 19:17:24 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:23:23.053 Fuzzing completed. Shutting down the fuzz application 00:23:23.053 00:23:23.053 Dumping successful admin opcodes: 00:23:23.053 9, 10, 00:23:23.053 Dumping successful io opcodes: 00:23:23.053 0, 9, 00:23:23.053 NS: 0x2000008eff00 I/O qp, Total commands completed: 1011100, total successful commands: 5922, random_seed: 3955388736 00:23:23.053 NS: 0x2000008eff00 admin qp, Total commands completed: 130320, total successful commands: 29, random_seed: 3606932416 00:23:23.053 19:17:54 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:23.053 Fuzzing completed. Shutting down the fuzz application 00:23:23.053 00:23:23.053 Dumping successful admin opcodes: 00:23:23.053 00:23:23.053 Dumping successful io opcodes: 00:23:23.053 00:23:23.053 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1858587968 00:23:23.053 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1858653538 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:23.053 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
00:23:23.054 rmmod nvme_rdma 00:23:23.054 rmmod nvme_fabrics 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 353186 ']' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 353186 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 353186 ']' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 353186 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353186 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353186' 00:23:23.054 killing process with pid 353186 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 353186 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 353186 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:23.054 00:23:23.054 real 0m39.742s 00:23:23.054 user 0m49.740s 00:23:23.054 sys 0m21.424s 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.054 ************************************ 00:23:23.054 END TEST nvmf_fuzz 00:23:23.054 ************************************ 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.054 ************************************ 00:23:23.054 START TEST nvmf_multiconnection 00:23:23.054 ************************************ 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:23:23.054 * Looking for test storage... 00:23:23.054 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.054 --rc genhtml_branch_coverage=1 00:23:23.054 --rc genhtml_function_coverage=1 00:23:23.054 --rc genhtml_legend=1 00:23:23.054 --rc geninfo_all_blocks=1 00:23:23.054 --rc geninfo_unexecuted_blocks=1 00:23:23.054 00:23:23.054 ' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:23.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.054 --rc genhtml_branch_coverage=1 00:23:23.054 --rc genhtml_function_coverage=1 00:23:23.054 --rc genhtml_legend=1 00:23:23.054 --rc geninfo_all_blocks=1 00:23:23.054 --rc geninfo_unexecuted_blocks=1 00:23:23.054 00:23:23.054 ' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.054 --rc genhtml_branch_coverage=1 00:23:23.054 --rc genhtml_function_coverage=1 00:23:23.054 --rc genhtml_legend=1 00:23:23.054 --rc geninfo_all_blocks=1 00:23:23.054 --rc geninfo_unexecuted_blocks=1 00:23:23.054 00:23:23.054 ' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.054 --rc genhtml_branch_coverage=1 00:23:23.054 --rc genhtml_function_coverage=1 00:23:23.054 --rc genhtml_legend=1 00:23:23.054 --rc geninfo_all_blocks=1 00:23:23.054 --rc geninfo_unexecuted_blocks=1 00:23:23.054 00:23:23.054 ' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.054 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.055 19:17:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:29.630 
19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:29.630 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:29.630 19:18:03 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:29.630 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.630 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:29.631 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:29.631 19:18:03 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:29.631 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:29.631 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:29.631 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:29.631 altname enp217s0f0np0 00:23:29.631 altname ens818f0np0 00:23:29.631 inet 192.168.100.8/24 scope global mlx_0_0 00:23:29.631 valid_lft forever preferred_lft forever 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:29.631 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:29.631 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:29.631 altname enp217s0f1np1 00:23:29.631 altname ens818f1np1 00:23:29.631 inet 192.168.100.9/24 scope global mlx_0_1 00:23:29.631 valid_lft forever preferred_lft forever 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:29.631 192.168.100.9' 00:23:29.631 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:23:29.632 19:18:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:29.632 192.168.100.9' 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:29.891 192.168.100.9' 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=362111 00:23:29.891 19:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 362111 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 362111 ']' 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.891 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:29.891 [2024-12-13 19:18:04.103605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:29.892 [2024-12-13 19:18:04.103655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.892 [2024-12-13 19:18:04.198498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.892 [2024-12-13 19:18:04.222207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.892 [2024-12-13 19:18:04.222243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.892 [2024-12-13 19:18:04.222253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.892 [2024-12-13 19:18:04.222262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.892 [2024-12-13 19:18:04.222269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
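nvmfappstart pairs the target launch with waitforlisten, which blocks until the new process answers on its UNIX-domain RPC socket. A hedged sketch of that start-and-poll pattern (the real helper lives in autotest_common.sh and also caps the number of retries; rpc_get_methods is used here only as a cheap liveness probe):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Same flags as logged above: instance 0, tracepoint mask 0xFFFF, core mask 0xF
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll /var/tmp/spdk.sock until the target starts servicing RPCs
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done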
00:23:29.892 [2024-12-13 19:18:04.224070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.892 [2024-12-13 19:18:04.224133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.892 [2024-12-13 19:18:04.224245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.892 [2024-12-13 19:18:04.224246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.151 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.151 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:23:30.151 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:30.151 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.151 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.151 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.151 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:30.152 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.152 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.152 [2024-12-13 19:18:04.387526] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e34540/0x1e389f0) succeed. 00:23:30.152 [2024-12-13 19:18:04.397179] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e35b80/0x1e7a090) succeed. 
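With all four reactors up, multiconnection.sh@19 creates the RDMA transport over that RPC socket, and the two create_ib_device notices above confirm both mlx5 ports were claimed. The rpc_cmd wrapper in the log corresponds to a plain scripts/rpc.py invocation, roughly:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # -t: transport type; --num-shared-buffers: size of the shared receive buffer pool;
    # -u: I/O unit size in bytes (8 KiB here)
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192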
00:23:30.152 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.152 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 Malloc1 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 [2024-12-13 19:18:04.586386] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 Malloc2 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
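That is one full pass of the multiconnection.sh@21-25 loop: a malloc bdev, a subsystem, a namespace, and an RDMA listener for cnode1. The same four RPCs repeat for cnode2 through cnode11 below; condensed into a standalone sketch:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    for i in $(seq 1 11); do
        # 64 MB backing bdev with a 512-byte block size, named MallocN
        "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
        # -a: allow any host NQN to connect; -s: serial number reported to initiators
        "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done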
00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 Malloc3 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 
19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 Malloc4 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:30.412 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.413 Malloc5 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.413 19:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.413 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 Malloc6 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 Malloc7 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 Malloc8 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 Malloc9 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 Malloc10 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:30.673 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.674 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.933 Malloc11 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.933 19:18:05 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:30.933 19:18:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:31.870 19:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:31.870 19:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:31.870 19:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:31.870 19:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:31.870 19:18:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:33.787 19:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:33.787 19:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:33.787 19:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:23:33.787 19:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:33.787 19:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:33.787 19:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:33.787 19:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:33.787 19:18:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:23:34.725 19:18:09 
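cnode1 is now connected and its namespace verified; the same connect-and-wait pair repeats below for the remaining ten subsystems. As a sketch of the multiconnection.sh@28-30 loop (host NQN and host ID are this test host's UUID, as logged):

    for n in $(seq 1 11); do
        # -i: number of I/O queues to request (15 here)
        nvme connect -i 15 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
            --hostid=8013ee90-59d8-e711-906e-00163566263e \
            -t rdma -n "nqn.2016-06.io.spdk:cnode$n" -a 192.168.100.8 -s 4420
        # waitforserial: poll (up to 15 tries, 2 s apart) until lsblk shows a
        # block device with serial SPDKn, per autotest_common.sh@1209-1212 above
        try=0
        while (( try++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$n") == 1 )) && break
        done
    done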
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:34.725 19:18:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:34.725 19:18:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:34.725 19:18:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:34.725 19:18:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:37.262 19:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:37.262 19:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:37.262 19:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:23:37.262 19:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:37.262 19:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:37.262 19:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:37.262 19:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:37.262 19:18:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:23:37.830 19:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:37.830 19:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:37.830 19:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:37.830 19:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:37.830 19:18:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:39.733 19:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:39.733 19:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:39.733 19:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:23:39.992 19:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:39.992 19:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:39.992 19:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:39.992 19:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:39.992 19:18:14 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:23:40.928 19:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:40.928 19:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:40.928 19:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:40.928 19:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:40.928 19:18:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:42.830 19:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:42.830 19:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:42.830 19:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:23:42.830 19:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:42.830 19:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:42.830 19:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:42.830 19:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:42.830 19:18:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:23:43.768 19:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:43.768 19:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:43.768 19:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:43.768 19:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:43.768 19:18:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:46.302 19:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:46.302 19:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:46.302 19:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:23:46.302 19:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:46.302 19:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:46.302 19:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:46.302 19:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.302 19:18:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:23:46.870 19:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:46.870 19:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:46.870 19:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:46.870 19:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:46.870 19:18:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:48.775 19:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:48.775 19:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:48.775 19:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:23:48.775 19:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:48.775 19:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:48.775 19:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:48.775 19:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:48.775 19:18:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:23:50.152 19:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:50.152 19:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:50.152 19:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:50.152 19:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:50.152 19:18:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:52.055 19:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:52.055 19:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:52.055 19:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:23:52.055 19:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:52.055 19:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:23:52.055 19:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:52.055 19:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:52.055 19:18:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:23:52.992 19:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:52.992 19:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:52.992 19:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:52.992 19:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:52.992 19:18:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:54.896 19:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:54.896 19:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:54.896 19:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:23:54.896 19:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:54.896 19:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:54.896 19:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:54.896 19:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.896 19:18:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:23:55.832 19:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:55.832 19:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:55.832 19:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.832 19:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:55.832 19:18:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:23:58.369 19:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:58.369 19:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:58.369 19:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:23:58.369 19:18:32 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:58.369 19:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:58.369 19:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:23:58.369 19:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.369 19:18:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:23:58.938 19:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:58.938 19:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:23:58.938 19:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:58.938 19:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:58.938 19:18:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:00.843 19:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:00.843 19:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:00.843 19:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:24:00.843 19:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:00.843 19:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:00.843 19:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:00.843 19:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.843 19:18:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:24:02.219 19:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:02.219 19:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:02.219 19:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.219 19:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:02.219 19:18:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:04.124 19:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:04.124 19:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
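With all eleven serials visible, multiconnection.sh@33 hands control to fio-wrapper, which writes a job file with one sequential-read job per connected namespace (256 KiB blocks, queue depth 64, 10 seconds, libaio) and runs fio against it; the generated file is echoed below. The wrapper's internals are not shown in the log, but a hypothetical generator producing the same file would look roughly like:

    # Assumed reconstruction of the job file dumped below, not fio-wrapper itself
    jobfile=$(mktemp)
    {
        printf '%s\n' '[global]' thread=1 invalidate=1 rw=read time_based=1 \
            runtime=10 ioengine=libaio direct=1 bs=262144 iodepth=64 \
            norandommap=1 numjobs=1
        n=0
        # Brace expansion reproduces this run's device order: nvme0, nvme10, nvme1..nvme9
        for dev in /dev/nvme{0,10,1,2,3,4,5,6,7,8,9}n1; do
            printf '[job%d]\nfilename=%s\n' "$n" "$dev"
            n=$((n + 1))
        done
    } > "$jobfile"
    fio "$jobfile"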
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:24:04.124 19:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11
00:24:04.124 19:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:24:04.124 19:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:24:04.124 19:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:24:04.124 19:18:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:24:04.124 [global]
00:24:04.124 thread=1
00:24:04.124 invalidate=1
00:24:04.124 rw=read
00:24:04.124 time_based=1
00:24:04.124 runtime=10
00:24:04.124 ioengine=libaio
00:24:04.124 direct=1
00:24:04.124 bs=262144
00:24:04.124 iodepth=64
00:24:04.124 norandommap=1
00:24:04.124 numjobs=1
00:24:04.124
00:24:04.124 [job0]
00:24:04.124 filename=/dev/nvme0n1
00:24:04.124 [job1]
00:24:04.124 filename=/dev/nvme10n1
00:24:04.124 [job2]
00:24:04.124 filename=/dev/nvme1n1
00:24:04.124 [job3]
00:24:04.124 filename=/dev/nvme2n1
00:24:04.124 [job4]
00:24:04.124 filename=/dev/nvme3n1
00:24:04.124 [job5]
00:24:04.124 filename=/dev/nvme4n1
00:24:04.124 [job6]
00:24:04.124 filename=/dev/nvme5n1
00:24:04.124 [job7]
00:24:04.124 filename=/dev/nvme6n1
00:24:04.124 [job8]
00:24:04.124 filename=/dev/nvme7n1
00:24:04.124 [job9]
00:24:04.124 filename=/dev/nvme8n1
00:24:04.124 [job10]
00:24:04.124 filename=/dev/nvme9n1
00:24:04.124 Could not set queue depth (nvme0n1)
00:24:04.124 Could not set queue depth (nvme10n1)
00:24:04.124 Could not set queue depth (nvme1n1)
00:24:04.124 Could not set queue depth (nvme2n1)
00:24:04.124 Could not set queue depth (nvme3n1)
00:24:04.124 Could not set queue depth (nvme4n1)
00:24:04.124 Could not set queue depth (nvme5n1)
00:24:04.124 Could not set queue depth (nvme6n1)
00:24:04.124 Could not set queue depth (nvme7n1)
00:24:04.124 Could not set queue depth (nvme8n1)
00:24:04.124 Could not set queue depth (nvme9n1)
00:24:04.689 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:04.689 fio-3.35
00:24:04.689 Starting 11 threads
00:24:16.902
00:24:16.902 job0: (groupid=0, jobs=1): err= 0: pid=368685: Fri Dec 13 19:18:49 2024
00:24:16.902 read: IOPS=1427, BW=357MiB/s (374MB/s)(3584MiB/10042msec)
00:24:16.902 slat (usec): min=11, max=18562, avg=690.72, stdev=1742.48
00:24:16.902 clat (usec): min=12301, max=87196, avg=44089.94, stdev=12084.34
00:24:16.902 lat (usec): min=12564, max=87219, avg=44780.66, stdev=12349.35
00:24:16.902 clat percentiles (usec):
00:24:16.902 | 1.00th=[27132], 5.00th=[27919], 10.00th=[28705], 20.00th=[29754],
00:24:16.902 | 30.00th=[31327], 40.00th=[42730], 50.00th=[44303], 60.00th=[46924],
00:24:16.902 | 70.00th=[55837], 80.00th=[57410], 90.00th=[58983], 95.00th=[60556],
00:24:16.902 | 99.00th=[64750], 99.50th=[67634], 99.90th=[74974], 99.95th=[79168],
00:24:16.902 | 99.99th=[87557]
00:24:16.902 bw ( KiB/s): min=272384, max=552448, per=10.84%, avg=365421.00, stdev=103256.48, samples=20
00:24:16.902 iops : min= 1064, max= 2158, avg=1427.40, stdev=403.36, samples=20
00:24:16.902 lat (msec) : 20=0.22%, 50=66.40%, 100=33.38%
00:24:16.902 cpu : usr=0.56%, sys=5.53%, ctx=2780, majf=0, minf=4097
00:24:16.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:24:16.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:16.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:16.902 issued rwts: total=14336,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:16.902 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:16.903 job1: (groupid=0, jobs=1): err= 0: pid=368686: Fri Dec 13 19:18:49 2024
00:24:16.903 read: IOPS=922, BW=231MiB/s (242MB/s)(2321MiB/10061msec)
00:24:16.903 slat (usec): min=10, max=35300, avg=1070.39, stdev=3581.15
00:24:16.903 clat (msec): min=12, max=130, avg=68.22, stdev=12.22
00:24:16.903 lat (msec): min=12, max=130, avg=69.29, stdev=12.81
00:24:16.903 clat percentiles (msec):
00:24:16.903 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 61], 20.00th=[ 69],
00:24:16.903 | 30.00th=[ 70], 40.00th=[ 70], 50.00th=[ 71], 60.00th=[ 71],
00:24:16.903 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 79],
00:24:16.903 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 130],
00:24:16.903 | 99.99th=[ 131]
00:24:16.903 bw ( KiB/s): min=197120, max=435712, per=7.00%, avg=236032.00, stdev=48156.09, samples=20
00:24:16.903 iops : min= 770, max= 1702, avg=922.00, stdev=188.11, samples=20
00:24:16.903 lat (msec) : 20=0.38%, 50=8.09%, 100=90.95%, 250=0.58%
00:24:16.903 cpu : usr=0.37%, sys=3.14%, ctx=1775, majf=0, minf=4097
00:24:16.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:16.903 issued rwts: total=9283,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:16.903 job2: (groupid=0, jobs=1): err= 0: pid=368687: Fri Dec 13 19:18:49 2024
00:24:16.903 read: IOPS=1628, BW=407MiB/s (427MB/s)(4088MiB/10042msec)
00:24:16.903 slat (usec): min=11, max=17167, avg=603.99, stdev=1602.60
00:24:16.903 clat (usec): min=11666, max=82905, avg=38663.51, stdev=11516.23
00:24:16.903 lat (usec): min=11915, max=82934, avg=39267.50, stdev=11755.83
00:24:16.903 clat percentiles (usec):
00:24:16.903 | 1.00th=[27132], 5.00th=[27919], 10.00th=[28443], 20.00th=[28967],
00:24:16.903 | 30.00th=[29754], 40.00th=[30278], 50.00th=[31589], 60.00th=[42730],
00:24:16.903 | 70.00th=[44827], 80.00th=[47973], 90.00th=[58983], 95.00th=[60031],
00:24:16.903 | 99.00th=[64226], 99.50th=[67634], 99.90th=[74974], 99.95th=[79168],
00:24:16.903 | 99.99th=[83362]
00:24:16.903 bw ( KiB/s): min=269824, max=544256, per=12.36%, avg=416947.20, stdev=112910.41, samples=20
00:24:16.903 iops : min= 1054, max= 2126, avg=1628.70, stdev=441.06, samples=20
00:24:16.903 lat (msec) : 20=0.31%, 50=82.69%, 100=17.00%
00:24:16.903 cpu : usr=0.47%, sys=4.75%, ctx=3187, majf=0, minf=4097
00:24:16.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:16.903 issued rwts: total=16350,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:16.903 job3: (groupid=0, jobs=1): err= 0: pid=368688: Fri Dec 13 19:18:49 2024
00:24:16.903 read: IOPS=1168, BW=292MiB/s (306MB/s)(2932MiB/10041msec)
00:24:16.903 slat (usec): min=12, max=33816, avg=838.38, stdev=2205.59
00:24:16.903 clat (msec): min=10, max=120, avg=53.90, stdev= 8.60
00:24:16.903 lat (msec): min=11, max=120, avg=54.74, stdev= 8.92
00:24:16.903 clat percentiles (msec):
00:24:16.903 | 1.00th=[ 40], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 45],
00:24:16.903 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 58],
00:24:16.903 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 64],
00:24:16.903 | 99.00th=[ 85], 99.50th=[ 87], 99.90th=[ 89], 99.95th=[ 92],
00:24:16.903 | 99.99th=[ 121]
00:24:16.903 bw ( KiB/s): min=249344, max=368640, per=8.85%, avg=298624.00, stdev=37533.43, samples=20
00:24:16.903 iops : min= 974, max= 1440, avg=1166.50, stdev=146.61, samples=20
00:24:16.903 lat (msec) : 20=0.28%, 50=37.56%, 100=62.14%, 250=0.02%
00:24:16.903 cpu : usr=0.51%, sys=5.23%, ctx=2266, majf=0, minf=4097
00:24:16.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:16.903 issued rwts: total=11728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:16.903 job4: (groupid=0, jobs=1): err= 0: pid=368689: Fri Dec 13 19:18:49 2024
00:24:16.903 read: IOPS=1183, BW=296MiB/s (310MB/s)(2972MiB/10041msec)
00:24:16.903 slat (usec): min=10, max=15865, avg=818.38, stdev=1992.66
00:24:16.903 clat (usec): min=8606, max=79115, avg=53189.69, stdev=8638.91
00:24:16.903 lat (usec): min=8851, max=79138, avg=54008.07, stdev=8930.98
00:24:16.903 clat percentiles (usec):
00:24:16.903 | 1.00th=[30802], 5.00th=[42206], 10.00th=[42730], 20.00th=[43779],
00:24:16.903 | 30.00th=[45351], 40.00th=[55313], 50.00th=[57410], 60.00th=[57934],
00:24:16.903 | 70.00th=[58983], 80.00th=[60031], 90.00th=[61604], 95.00th=[63177],
00:24:16.903 | 99.00th=[67634], 99.50th=[68682], 99.90th=[73925], 99.95th=[76022],
00:24:16.903 | 99.99th=[79168]
00:24:16.903 bw ( KiB/s): min=262144, max=369664, per=8.98%, avg=302729.60, stdev=39858.96, samples=20
00:24:16.903 iops : min= 1024, max= 1444, avg=1182.50, stdev=155.65, samples=20
00:24:16.903 lat (msec) : 10=0.10%, 20=0.41%, 50=37.44%, 100=62.04%
00:24:16.903 cpu : usr=0.36%, sys=4.05%, ctx=2528, majf=0, minf=4097
00:24:16.903 IO
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:16.903 issued rwts: total=11887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:16.903 job5: (groupid=0, jobs=1): err= 0: pid=368690: Fri Dec 13 19:18:49 2024 00:24:16.903 read: IOPS=911, BW=228MiB/s (239MB/s)(2293MiB/10061msec) 00:24:16.903 slat (usec): min=14, max=40918, avg=1081.22, stdev=3615.81 00:24:16.903 clat (msec): min=8, max=118, avg=69.05, stdev=10.39 00:24:16.903 lat (msec): min=8, max=118, avg=70.13, stdev=11.04 00:24:16.903 clat percentiles (msec): 00:24:16.903 | 1.00th=[ 26], 5.00th=[ 47], 10.00th=[ 62], 20.00th=[ 69], 00:24:16.903 | 30.00th=[ 70], 40.00th=[ 70], 50.00th=[ 71], 60.00th=[ 71], 00:24:16.903 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 79], 00:24:16.903 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 114], 99.95th=[ 117], 00:24:16.903 | 99.99th=[ 120] 00:24:16.903 bw ( KiB/s): min=199680, max=359424, per=6.91%, avg=233164.80, stdev=31488.01, samples=20 00:24:16.903 iops : min= 780, max= 1404, avg=910.80, stdev=123.00, samples=20 00:24:16.903 lat (msec) : 10=0.19%, 20=0.58%, 50=6.80%, 100=91.71%, 250=0.72% 00:24:16.903 cpu : usr=0.45%, sys=3.74%, ctx=1718, majf=0, minf=4097 00:24:16.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:16.903 issued rwts: total=9172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:16.903 job6: (groupid=0, jobs=1): err= 0: pid=368691: Fri Dec 13 19:18:49 2024 00:24:16.903 read: IOPS=1425, BW=356MiB/s (374MB/s)(3579MiB/10042msec) 00:24:16.903 slat (usec): min=12, max=22169, avg=695.01, stdev=1946.11 00:24:16.903 clat (usec): min=10506, max=83343, avg=44154.62, stdev=12229.62 00:24:16.903 lat (usec): min=10862, max=85065, avg=44849.63, stdev=12520.34 00:24:16.903 clat percentiles (usec): 00:24:16.903 | 1.00th=[27132], 5.00th=[27919], 10.00th=[28705], 20.00th=[29754], 00:24:16.903 | 30.00th=[31327], 40.00th=[42730], 50.00th=[44303], 60.00th=[46924], 00:24:16.903 | 70.00th=[55837], 80.00th=[57410], 90.00th=[58983], 95.00th=[60556], 00:24:16.903 | 99.00th=[66323], 99.50th=[72877], 99.90th=[80217], 99.95th=[81265], 00:24:16.903 | 99.99th=[83362] 00:24:16.903 bw ( KiB/s): min=270336, max=553984, per=10.82%, avg=364884.05, stdev=102889.31, samples=20 00:24:16.903 iops : min= 1056, max= 2164, avg=1425.30, stdev=401.92, samples=20 00:24:16.903 lat (msec) : 20=0.26%, 50=65.92%, 100=33.82% 00:24:16.903 cpu : usr=0.55%, sys=5.62%, ctx=2611, majf=0, minf=4097 00:24:16.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:16.903 issued rwts: total=14315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:16.903 job7: (groupid=0, jobs=1): err= 0: pid=368692: Fri Dec 13 19:18:49 2024 00:24:16.903 read: IOPS=918, BW=230MiB/s (241MB/s)(2310MiB/10063msec) 00:24:16.903 slat (usec): min=10, max=32420, avg=1075.77, 
stdev=3386.38 00:24:16.903 clat (msec): min=10, max=123, avg=68.56, stdev=10.03 00:24:16.903 lat (msec): min=11, max=123, avg=69.64, stdev=10.63 00:24:16.903 clat percentiles (msec): 00:24:16.903 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 69], 00:24:16.903 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 71], 60.00th=[ 71], 00:24:16.903 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 80], 00:24:16.903 | 99.00th=[ 92], 99.50th=[ 99], 99.90th=[ 121], 99.95th=[ 124], 00:24:16.903 | 99.99th=[ 125] 00:24:16.903 bw ( KiB/s): min=210944, max=329216, per=6.96%, avg=234880.00, stdev=32400.41, samples=20 00:24:16.903 iops : min= 824, max= 1286, avg=917.50, stdev=126.56, samples=20 00:24:16.903 lat (msec) : 20=0.41%, 50=10.44%, 100=88.81%, 250=0.35% 00:24:16.903 cpu : usr=0.38%, sys=3.42%, ctx=1736, majf=0, minf=3659 00:24:16.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:16.903 issued rwts: total=9238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:16.903 job8: (groupid=0, jobs=1): err= 0: pid=368693: Fri Dec 13 19:18:49 2024 00:24:16.903 read: IOPS=916, BW=229MiB/s (240MB/s)(2305MiB/10059msec) 00:24:16.903 slat (usec): min=12, max=41495, avg=1076.45, stdev=4202.96 00:24:16.903 clat (msec): min=10, max=127, avg=68.68, stdev=10.14 00:24:16.903 lat (msec): min=10, max=127, avg=69.76, stdev=11.01 00:24:16.903 clat percentiles (msec): 00:24:16.903 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 69], 00:24:16.903 | 30.00th=[ 70], 40.00th=[ 70], 50.00th=[ 71], 60.00th=[ 71], 00:24:16.903 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 78], 00:24:16.903 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 126], 99.95th=[ 128], 00:24:16.903 | 99.99th=[ 128] 00:24:16.903 bw ( KiB/s): min=206848, max=333157, per=6.95%, avg=234385.85, stdev=31993.01, samples=20 00:24:16.903 iops : min= 808, max= 1301, avg=915.55, stdev=124.91, samples=20 00:24:16.903 lat (msec) : 20=0.40%, 50=10.40%, 100=88.07%, 250=1.13% 00:24:16.903 cpu : usr=0.33%, sys=3.57%, ctx=1743, majf=0, minf=4097 00:24:16.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:16.903 issued rwts: total=9220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:16.903 job9: (groupid=0, jobs=1): err= 0: pid=368694: Fri Dec 13 19:18:49 2024 00:24:16.903 read: IOPS=1785, BW=446MiB/s (468MB/s)(4483MiB/10042msec) 00:24:16.903 slat (usec): min=11, max=20956, avg=553.70, stdev=1486.77 00:24:16.903 clat (usec): min=1391, max=88249, avg=35251.54, stdev=12813.60 00:24:16.903 lat (usec): min=1424, max=88276, avg=35805.24, stdev=13062.33 00:24:16.903 clat percentiles (usec): 00:24:16.903 | 1.00th=[13829], 5.00th=[15270], 10.00th=[16319], 20.00th=[28443], 00:24:16.903 | 30.00th=[29230], 40.00th=[30016], 50.00th=[30540], 60.00th=[31851], 00:24:16.903 | 70.00th=[42730], 80.00th=[44827], 90.00th=[58459], 95.00th=[60031], 00:24:16.903 | 99.00th=[63177], 99.50th=[65799], 99.90th=[73925], 99.95th=[81265], 00:24:16.903 | 99.99th=[83362] 00:24:16.903 bw ( KiB/s): min=267264, max=972800, per=13.56%, avg=457473.90, 
stdev=165132.69, samples=20 00:24:16.903 iops : min= 1044, max= 3800, avg=1787.00, stdev=645.05, samples=20 00:24:16.903 lat (msec) : 2=0.06%, 4=0.09%, 20=10.99%, 50=74.96%, 100=13.90% 00:24:16.903 cpu : usr=0.53%, sys=6.53%, ctx=3361, majf=0, minf=4097 00:24:16.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:16.903 issued rwts: total=17931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:16.903 job10: (groupid=0, jobs=1): err= 0: pid=368695: Fri Dec 13 19:18:49 2024 00:24:16.904 read: IOPS=905, BW=226MiB/s (237MB/s)(2277MiB/10061msec) 00:24:16.904 slat (usec): min=14, max=28877, avg=1089.77, stdev=3060.25 00:24:16.904 clat (msec): min=12, max=127, avg=69.54, stdev= 8.63 00:24:16.904 lat (msec): min=12, max=127, avg=70.63, stdev= 9.18 00:24:16.904 clat percentiles (msec): 00:24:16.904 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 63], 20.00th=[ 70], 00:24:16.904 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 71], 60.00th=[ 71], 00:24:16.904 | 70.00th=[ 72], 80.00th=[ 72], 90.00th=[ 74], 95.00th=[ 79], 00:24:16.904 | 99.00th=[ 89], 99.50th=[ 93], 99.90th=[ 124], 99.95th=[ 126], 00:24:16.904 | 99.99th=[ 128] 00:24:16.904 bw ( KiB/s): min=208384, max=331264, per=6.87%, avg=231526.40, stdev=24976.72, samples=20 00:24:16.904 iops : min= 814, max= 1294, avg=904.40, stdev=97.57, samples=20 00:24:16.904 lat (msec) : 20=0.35%, 50=5.81%, 100=93.53%, 250=0.31% 00:24:16.904 cpu : usr=0.38%, sys=4.07%, ctx=1770, majf=0, minf=4097 00:24:16.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:16.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:16.904 issued rwts: total=9107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.904 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:16.904 00:24:16.904 Run status group 0 (all jobs): 00:24:16.904 READ: bw=3293MiB/s (3453MB/s), 226MiB/s-446MiB/s (237MB/s-468MB/s), io=32.4GiB (34.8GB), run=10041-10063msec 00:24:16.904 00:24:16.904 Disk stats (read/write): 00:24:16.904 nvme0n1: ios=28269/0, merge=0/0, ticks=1223492/0, in_queue=1223492, util=96.95% 00:24:16.904 nvme10n1: ios=18270/0, merge=0/0, ticks=1221327/0, in_queue=1221327, util=97.17% 00:24:16.904 nvme1n1: ios=32313/0, merge=0/0, ticks=1221139/0, in_queue=1221139, util=97.50% 00:24:16.904 nvme2n1: ios=23046/0, merge=0/0, ticks=1224398/0, in_queue=1224398, util=97.66% 00:24:16.904 nvme3n1: ios=23373/0, merge=0/0, ticks=1223757/0, in_queue=1223757, util=97.76% 00:24:16.904 nvme4n1: ios=18048/0, merge=0/0, ticks=1223936/0, in_queue=1223936, util=98.17% 00:24:16.904 nvme5n1: ios=28242/0, merge=0/0, ticks=1222273/0, in_queue=1222273, util=98.35% 00:24:16.904 nvme6n1: ios=18177/0, merge=0/0, ticks=1223752/0, in_queue=1223752, util=98.49% 00:24:16.904 nvme7n1: ios=18161/0, merge=0/0, ticks=1225923/0, in_queue=1225923, util=98.93% 00:24:16.904 nvme8n1: ios=35464/0, merge=0/0, ticks=1222803/0, in_queue=1222803, util=99.15% 00:24:16.904 nvme9n1: ios=17916/0, merge=0/0, ticks=1224851/0, in_queue=1224851, util=99.30% 00:24:16.904 19:18:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 
64 -t randwrite -r 10 00:24:16.904 [global] 00:24:16.904 thread=1 00:24:16.904 invalidate=1 00:24:16.904 rw=randwrite 00:24:16.904 time_based=1 00:24:16.904 runtime=10 00:24:16.904 ioengine=libaio 00:24:16.904 direct=1 00:24:16.904 bs=262144 00:24:16.904 iodepth=64 00:24:16.904 norandommap=1 00:24:16.904 numjobs=1 00:24:16.904 00:24:16.904 [job0] 00:24:16.904 filename=/dev/nvme0n1 00:24:16.904 [job1] 00:24:16.904 filename=/dev/nvme10n1 00:24:16.904 [job2] 00:24:16.904 filename=/dev/nvme1n1 00:24:16.904 [job3] 00:24:16.904 filename=/dev/nvme2n1 00:24:16.904 [job4] 00:24:16.904 filename=/dev/nvme3n1 00:24:16.904 [job5] 00:24:16.904 filename=/dev/nvme4n1 00:24:16.904 [job6] 00:24:16.904 filename=/dev/nvme5n1 00:24:16.904 [job7] 00:24:16.904 filename=/dev/nvme6n1 00:24:16.904 [job8] 00:24:16.904 filename=/dev/nvme7n1 00:24:16.904 [job9] 00:24:16.904 filename=/dev/nvme8n1 00:24:16.904 [job10] 00:24:16.904 filename=/dev/nvme9n1 00:24:16.904 Could not set queue depth (nvme0n1) 00:24:16.904 Could not set queue depth (nvme10n1) 00:24:16.904 Could not set queue depth (nvme1n1) 00:24:16.904 Could not set queue depth (nvme2n1) 00:24:16.904 Could not set queue depth (nvme3n1) 00:24:16.904 Could not set queue depth (nvme4n1) 00:24:16.904 Could not set queue depth (nvme5n1) 00:24:16.904 Could not set queue depth (nvme6n1) 00:24:16.904 Could not set queue depth (nvme7n1) 00:24:16.904 Could not set queue depth (nvme8n1) 00:24:16.904 Could not set queue depth (nvme9n1) 00:24:16.904 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:16.904 fio-3.35 00:24:16.904 Starting 11 threads 00:24:26.886 00:24:26.886 job0: (groupid=0, jobs=1): err= 0: pid=370424: Fri Dec 13 19:19:00 2024 00:24:26.886 write: IOPS=763, BW=191MiB/s (200MB/s)(1915MiB/10031msec); 0 zone resets 00:24:26.886 slat (usec): min=24, max=24395, avg=1277.87, stdev=2486.01 00:24:26.886 clat (msec): min=3, max=129, avg=82.51, stdev=19.36 00:24:26.886 lat (msec): min=3, max=130, avg=83.79, stdev=19.68 00:24:26.886 clat percentiles (msec): 00:24:26.886 | 1.00th=[ 22], 5.00th=[ 40], 10.00th=[ 68], 20.00th=[ 73], 00:24:26.886 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 86], 60.00th=[ 89], 00:24:26.886 | 70.00th=[ 93], 80.00th=[ 
96], 90.00th=[ 107], 95.00th=[ 112], 00:24:26.886 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 127], 99.95th=[ 129], 00:24:26.886 | 99.99th=[ 130] 00:24:26.886 bw ( KiB/s): min=141312, max=322560, per=5.67%, avg=194483.20, stdev=41120.79, samples=20 00:24:26.886 iops : min= 552, max= 1260, avg=759.70, stdev=160.63, samples=20 00:24:26.886 lat (msec) : 4=0.07%, 10=0.22%, 20=0.59%, 50=5.97%, 100=78.02% 00:24:26.886 lat (msec) : 250=15.14% 00:24:26.886 cpu : usr=1.83%, sys=3.37%, ctx=1990, majf=0, minf=204 00:24:26.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:26.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.886 issued rwts: total=0,7660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.886 job1: (groupid=0, jobs=1): err= 0: pid=370436: Fri Dec 13 19:19:00 2024 00:24:26.886 write: IOPS=878, BW=220MiB/s (230MB/s)(2207MiB/10049msec); 0 zone resets 00:24:26.886 slat (usec): min=24, max=60113, avg=1104.04, stdev=2272.28 00:24:26.886 clat (msec): min=15, max=173, avg=71.73, stdev=18.47 00:24:26.886 lat (msec): min=15, max=174, avg=72.83, stdev=18.74 00:24:26.886 clat percentiles (msec): 00:24:26.886 | 1.00th=[ 36], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:24:26.886 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:24:26.886 | 70.00th=[ 75], 80.00th=[ 87], 90.00th=[ 104], 95.00th=[ 109], 00:24:26.886 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 140], 99.95th=[ 142], 00:24:26.886 | 99.99th=[ 174] 00:24:26.886 bw ( KiB/s): min=143872, max=294912, per=6.54%, avg=224358.40, stdev=48334.42, samples=20 00:24:26.886 iops : min= 562, max= 1152, avg=876.40, stdev=188.81, samples=20 00:24:26.886 lat (msec) : 20=0.17%, 50=2.76%, 100=85.88%, 250=11.18% 00:24:26.886 cpu : usr=2.01%, sys=3.76%, ctx=2265, majf=0, minf=8 00:24:26.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:26.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.886 issued rwts: total=0,8827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.886 job2: (groupid=0, jobs=1): err= 0: pid=370437: Fri Dec 13 19:19:00 2024 00:24:26.886 write: IOPS=880, BW=220MiB/s (231MB/s)(2212MiB/10051msec); 0 zone resets 00:24:26.886 slat (usec): min=22, max=18108, avg=1107.44, stdev=2141.42 00:24:26.886 clat (msec): min=8, max=128, avg=71.57, stdev=21.44 00:24:26.886 lat (msec): min=8, max=136, avg=72.68, stdev=21.76 00:24:26.886 clat percentiles (msec): 00:24:26.886 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 55], 00:24:26.886 | 30.00th=[ 57], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 77], 00:24:26.886 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 96], 95.00th=[ 108], 00:24:26.886 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 126], 99.95th=[ 126], 00:24:26.886 | 99.99th=[ 129] 00:24:26.886 bw ( KiB/s): min=139264, max=403456, per=6.56%, avg=224896.00, stdev=72126.11, samples=20 00:24:26.886 iops : min= 544, max= 1576, avg=878.50, stdev=281.74, samples=20 00:24:26.886 lat (msec) : 10=0.02%, 20=0.28%, 50=13.26%, 100=80.03%, 250=6.41% 00:24:26.886 cpu : usr=2.06%, sys=3.81%, ctx=2260, majf=0, minf=148 00:24:26.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:26.886 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.886 issued rwts: total=0,8848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.886 job3: (groupid=0, jobs=1): err= 0: pid=370438: Fri Dec 13 19:19:00 2024 00:24:26.886 write: IOPS=962, BW=241MiB/s (252MB/s)(2417MiB/10049msec); 0 zone resets 00:24:26.886 slat (usec): min=21, max=17388, avg=1029.31, stdev=1980.92 00:24:26.886 clat (msec): min=16, max=124, avg=65.47, stdev=19.22 00:24:26.886 lat (msec): min=16, max=124, avg=66.50, stdev=19.50 00:24:26.886 clat percentiles (msec): 00:24:26.886 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 54], 00:24:26.886 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 72], 00:24:26.886 | 70.00th=[ 74], 80.00th=[ 77], 90.00th=[ 90], 95.00th=[ 105], 00:24:26.886 | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:24:26.886 | 99.99th=[ 125] 00:24:26.886 bw ( KiB/s): min=153088, max=486912, per=7.17%, avg=245888.00, stdev=76037.53, samples=20 00:24:26.886 iops : min= 598, max= 1902, avg=960.50, stdev=297.02, samples=20 00:24:26.886 lat (msec) : 20=0.05%, 50=14.63%, 100=78.49%, 250=6.84% 00:24:26.886 cpu : usr=2.32%, sys=4.12%, ctx=2391, majf=0, minf=133 00:24:26.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:26.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.886 issued rwts: total=0,9668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.886 job4: (groupid=0, jobs=1): err= 0: pid=370439: Fri Dec 13 19:19:00 2024 00:24:26.886 write: IOPS=1165, BW=291MiB/s (306MB/s)(2922MiB/10028msec); 0 zone resets 00:24:26.886 slat (usec): min=21, max=57027, avg=821.06, stdev=1921.03 00:24:26.886 clat (msec): min=8, max=146, avg=54.07, stdev=27.19 00:24:26.886 lat (msec): min=8, max=159, avg=54.89, stdev=27.60 00:24:26.886 clat percentiles (msec): 00:24:26.886 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 36], 00:24:26.886 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 40], 00:24:26.886 | 70.00th=[ 63], 80.00th=[ 92], 90.00th=[ 101], 95.00th=[ 108], 00:24:26.886 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 125], 99.95th=[ 128], 00:24:26.886 | 99.99th=[ 146] 00:24:26.886 bw ( KiB/s): min=151040, max=440832, per=8.68%, avg=297625.60, stdev=126053.73, samples=20 00:24:26.886 iops : min= 590, max= 1722, avg=1162.60, stdev=492.40, samples=20 00:24:26.886 lat (msec) : 10=0.01%, 20=0.78%, 50=67.15%, 100=22.09%, 250=9.98% 00:24:26.886 cpu : usr=2.46%, sys=4.22%, ctx=2934, majf=0, minf=12 00:24:26.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:26.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.886 issued rwts: total=0,11689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.886 job5: (groupid=0, jobs=1): err= 0: pid=370440: Fri Dec 13 19:19:00 2024 00:24:26.886 write: IOPS=1561, BW=390MiB/s (409MB/s)(3924MiB/10052msec); 0 zone resets 00:24:26.886 slat (usec): min=17, max=23887, avg=623.65, stdev=1451.24 00:24:26.886 clat (usec): min=606, max=130726, avg=40349.77, 
stdev=26791.64 00:24:26.886 lat (usec): min=677, max=134475, avg=40973.43, stdev=27187.25 00:24:26.886 clat percentiles (msec): 00:24:26.886 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 18], 00:24:26.886 | 30.00th=[ 18], 40.00th=[ 19], 50.00th=[ 34], 60.00th=[ 39], 00:24:26.886 | 70.00th=[ 56], 80.00th=[ 71], 90.00th=[ 77], 95.00th=[ 85], 00:24:26.886 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 127], 00:24:26.886 | 99.99th=[ 131] 00:24:26.886 bw ( KiB/s): min=138240, max=909824, per=11.67%, avg=400179.20, stdev=271534.87, samples=20 00:24:26.886 iops : min= 540, max= 3554, avg=1563.20, stdev=1060.68, samples=20 00:24:26.886 lat (usec) : 750=0.03%, 1000=0.01% 00:24:26.886 lat (msec) : 2=0.13%, 4=0.18%, 10=0.54%, 20=45.62%, 50=17.71% 00:24:26.886 lat (msec) : 100=32.32%, 250=3.47% 00:24:26.886 cpu : usr=2.64%, sys=4.75%, ctx=3681, majf=0, minf=138 00:24:26.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:26.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.886 issued rwts: total=0,15695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.886 job6: (groupid=0, jobs=1): err= 0: pid=370441: Fri Dec 13 19:19:00 2024 00:24:26.886 write: IOPS=963, BW=241MiB/s (253MB/s)(2421MiB/10051msec); 0 zone resets 00:24:26.886 slat (usec): min=21, max=20877, avg=1027.43, stdev=1992.46 00:24:26.886 clat (msec): min=2, max=127, avg=65.38, stdev=19.49 00:24:26.886 lat (msec): min=2, max=127, avg=66.41, stdev=19.78 00:24:26.887 clat percentiles (msec): 00:24:26.887 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 54], 00:24:26.887 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 68], 60.00th=[ 72], 00:24:26.887 | 70.00th=[ 74], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 105], 00:24:26.887 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 124], 99.95th=[ 125], 00:24:26.887 | 99.99th=[ 128] 00:24:26.887 bw ( KiB/s): min=150528, max=486912, per=7.18%, avg=246297.60, stdev=76164.55, samples=20 00:24:26.887 iops : min= 588, max= 1902, avg=962.10, stdev=297.52, samples=20 00:24:26.887 lat (msec) : 4=0.02%, 10=0.08%, 20=0.12%, 50=14.64%, 100=78.34% 00:24:26.887 lat (msec) : 250=6.79% 00:24:26.887 cpu : usr=2.15%, sys=4.45%, ctx=2388, majf=0, minf=77 00:24:26.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:26.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.887 issued rwts: total=0,9684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.887 job7: (groupid=0, jobs=1): err= 0: pid=370442: Fri Dec 13 19:19:00 2024 00:24:26.887 write: IOPS=748, BW=187MiB/s (196MB/s)(1881MiB/10047msec); 0 zone resets 00:24:26.887 slat (usec): min=26, max=19605, avg=1281.26, stdev=2515.12 00:24:26.887 clat (usec): min=254, max=128802, avg=84160.01, stdev=20630.21 00:24:26.887 lat (usec): min=307, max=131633, avg=85441.27, stdev=20925.19 00:24:26.887 clat percentiles (msec): 00:24:26.887 | 1.00th=[ 4], 5.00th=[ 62], 10.00th=[ 72], 20.00th=[ 75], 00:24:26.887 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 90], 00:24:26.887 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 113], 00:24:26.887 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 129], 00:24:26.887 | 99.99th=[ 
129] 00:24:26.887 bw ( KiB/s): min=139264, max=332288, per=5.57%, avg=190976.00, stdev=41643.48, samples=20 00:24:26.887 iops : min= 544, max= 1298, avg=746.00, stdev=162.67, samples=20 00:24:26.887 lat (usec) : 500=0.08%, 750=0.05%, 1000=0.04% 00:24:26.887 lat (msec) : 2=0.35%, 4=0.58%, 10=1.70%, 20=1.17%, 50=0.56% 00:24:26.887 lat (msec) : 100=79.20%, 250=16.27% 00:24:26.887 cpu : usr=1.71%, sys=3.41%, ctx=2040, majf=0, minf=77 00:24:26.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:26.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.887 issued rwts: total=0,7523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.887 job8: (groupid=0, jobs=1): err= 0: pid=370443: Fri Dec 13 19:19:00 2024 00:24:26.887 write: IOPS=2613, BW=653MiB/s (685MB/s)(6558MiB/10036msec); 0 zone resets 00:24:26.887 slat (usec): min=17, max=13135, avg=374.56, stdev=810.85 00:24:26.887 clat (usec): min=1002, max=96027, avg=24106.88, stdev=12628.61 00:24:26.887 lat (usec): min=1055, max=96076, avg=24481.44, stdev=12814.71 00:24:26.887 clat percentiles (usec): 00:24:26.887 | 1.00th=[16057], 5.00th=[16712], 10.00th=[16909], 20.00th=[17433], 00:24:26.887 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:24:26.887 | 70.00th=[19268], 80.00th=[35390], 90.00th=[37487], 95.00th=[43779], 00:24:26.887 | 99.00th=[85459], 99.50th=[88605], 99.90th=[92799], 99.95th=[93848], 00:24:26.887 | 99.99th=[95945] 00:24:26.887 bw ( KiB/s): min=194048, max=914944, per=19.53%, avg=669875.20, stdev=242353.62, samples=20 00:24:26.887 iops : min= 758, max= 3574, avg=2616.70, stdev=946.69, samples=20 00:24:26.887 lat (msec) : 2=0.07%, 4=0.06%, 10=0.21%, 20=72.01%, 50=24.27% 00:24:26.887 lat (msec) : 100=3.38% 00:24:26.887 cpu : usr=4.04%, sys=6.13%, ctx=5603, majf=0, minf=469 00:24:26.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:26.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.887 issued rwts: total=0,26230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.887 job9: (groupid=0, jobs=1): err= 0: pid=370445: Fri Dec 13 19:19:00 2024 00:24:26.887 write: IOPS=1999, BW=500MiB/s (524MB/s)(5012MiB/10028msec); 0 zone resets 00:24:26.887 slat (usec): min=15, max=6357, avg=493.07, stdev=963.11 00:24:26.887 clat (usec): min=4653, max=66866, avg=31513.02, stdev=9036.25 00:24:26.887 lat (usec): min=4701, max=66917, avg=32006.09, stdev=9160.19 00:24:26.887 clat percentiles (usec): 00:24:26.887 | 1.00th=[16909], 5.00th=[17433], 10.00th=[17957], 20.00th=[18744], 00:24:26.887 | 30.00th=[26608], 40.00th=[34341], 50.00th=[35390], 60.00th=[36439], 00:24:26.887 | 70.00th=[36963], 80.00th=[38011], 90.00th=[38536], 95.00th=[40109], 00:24:26.887 | 99.00th=[50594], 99.50th=[52167], 99.90th=[55837], 99.95th=[58983], 00:24:26.887 | 99.99th=[63701] 00:24:26.887 bw ( KiB/s): min=366592, max=884736, per=14.92%, avg=511564.80, stdev=161132.17, samples=20 00:24:26.887 iops : min= 1432, max= 3456, avg=1998.30, stdev=629.42, samples=20 00:24:26.887 lat (msec) : 10=0.05%, 20=27.81%, 50=70.97%, 100=1.17% 00:24:26.887 cpu : usr=3.66%, sys=5.66%, ctx=4471, majf=0, minf=14 00:24:26.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:24:26.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.887 issued rwts: total=0,20046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.887 job10: (groupid=0, jobs=1): err= 0: pid=370447: Fri Dec 13 19:19:00 2024 00:24:26.887 write: IOPS=875, BW=219MiB/s (230MB/s)(2200MiB/10052msec); 0 zone resets 00:24:26.887 slat (usec): min=23, max=17012, avg=1117.90, stdev=2103.03 00:24:26.887 clat (msec): min=14, max=128, avg=71.95, stdev=21.02 00:24:26.887 lat (msec): min=14, max=133, avg=73.07, stdev=21.33 00:24:26.887 clat percentiles (msec): 00:24:26.887 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 55], 00:24:26.887 | 30.00th=[ 57], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 77], 00:24:26.887 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 97], 95.00th=[ 108], 00:24:26.887 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 126], 99.95th=[ 127], 00:24:26.887 | 99.99th=[ 129] 00:24:26.887 bw ( KiB/s): min=139776, max=380416, per=6.52%, avg=223692.80, stdev=67819.36, samples=20 00:24:26.887 iops : min= 546, max= 1486, avg=873.80, stdev=264.92, samples=20 00:24:26.887 lat (msec) : 20=0.14%, 50=13.63%, 100=79.93%, 250=6.29% 00:24:26.887 cpu : usr=2.16%, sys=3.72%, ctx=2237, majf=0, minf=79 00:24:26.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:26.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:26.887 issued rwts: total=0,8801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:26.887 00:24:26.887 Run status group 0 (all jobs): 00:24:26.887 WRITE: bw=3349MiB/s (3512MB/s), 187MiB/s-653MiB/s (196MB/s-685MB/s), io=32.9GiB (35.3GB), run=10028-10052msec 00:24:26.887 00:24:26.887 Disk stats (read/write): 00:24:26.887 nvme0n1: ios=49/14866, merge=0/0, ticks=15/1214333, in_queue=1214348, util=96.65% 00:24:26.887 nvme10n1: ios=0/17317, merge=0/0, ticks=0/1212135, in_queue=1212135, util=96.82% 00:24:26.887 nvme1n1: ios=0/17202, merge=0/0, ticks=0/1215709, in_queue=1215709, util=97.16% 00:24:26.887 nvme2n1: ios=0/19002, merge=0/0, ticks=0/1210395, in_queue=1210395, util=97.33% 00:24:26.887 nvme3n1: ios=0/22767, merge=0/0, ticks=0/1217054, in_queue=1217054, util=97.42% 00:24:26.887 nvme4n1: ios=0/30936, merge=0/0, ticks=0/1215793, in_queue=1215793, util=97.80% 00:24:26.887 nvme5n1: ios=0/19026, merge=0/0, ticks=0/1210680, in_queue=1210680, util=97.99% 00:24:26.887 nvme6n1: ios=0/14710, merge=0/0, ticks=0/1213275, in_queue=1213275, util=98.08% 00:24:26.887 nvme7n1: ios=0/51997, merge=0/0, ticks=0/1215811, in_queue=1215811, util=98.63% 00:24:26.887 nvme8n1: ios=0/39504, merge=0/0, ticks=0/1219576, in_queue=1219576, util=98.82% 00:24:26.887 nvme9n1: ios=0/17042, merge=0/0, ticks=0/1213162, in_queue=1213162, util=99.00% 00:24:26.887 19:19:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:26.887 19:19:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:26.887 19:19:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.887 19:19:00 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:27.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.147 19:19:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:28.090 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:28.090 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:28.090 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:28.090 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:28.090 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:24:28.090 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:28.090 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:24:28.090 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:28.090 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:28.091 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.091 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:28.091 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.091 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:28.091 19:19:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:29.027 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:29.027 19:19:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:29.964 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:29.964 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:29.964 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:29.964 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:29.964 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:24:30.223 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:24:30.223 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:30.223 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:30.223 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:30.223 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.223 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:30.223 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.223 19:19:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.223 19:19:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:31.158 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.158 19:19:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:32.093 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:24:32.093 19:19:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:33.030 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.030 19:19:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:33.967 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.967 19:19:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:35.345 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.345 19:19:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:35.912 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:35.912 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:35.912 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:35.912 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:35.912 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:24:36.171 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:36.171 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:24:36.171 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:36.171 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:36.171 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.171 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:36.171 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.171 19:19:10 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.171 19:19:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:37.108 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:37.108 rmmod nvme_rdma 00:24:37.108 rmmod nvme_fabrics 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 362111 ']' 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@518 -- # killprocess 362111 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 362111 ']' 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 362111 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362111 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362111' 00:24:37.108 killing process with pid 362111 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 362111 00:24:37.108 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 362111 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:37.678 00:24:37.678 real 1m15.331s 00:24:37.678 user 4m50.993s 00:24:37.678 sys 0m19.035s 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:37.678 ************************************ 00:24:37.678 END TEST nvmf_multiconnection 00:24:37.678 ************************************ 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:37.678 ************************************ 00:24:37.678 START TEST nvmf_initiator_timeout 00:24:37.678 ************************************ 00:24:37.678 19:19:11 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:24:37.938 * Looking for test storage... 
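Both fio passes above exercise the same generated job file: a [global] section pinning the I/O shape (-i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t selects rw=read or rw=randwrite, and -r 10 gives a 10-second time_based run) plus one [jobN] stanza per connected namespace. The sketch below rebuilds that job file in shell; the stanza contents are copied from the file echoed in the log, but the generation loop itself is an assumption, not fio-wrapper's actual implementation.

    # Hypothetical reconstruction of the job file fio-wrapper fed to fio above.
    jobfile=$(mktemp)
    {
        printf '[global]\n'
        printf '%s\n' thread=1 invalidate=1 rw=read time_based=1 runtime=10 \
            ioengine=libaio direct=1 bs=262144 iodepth=64 norandommap=1 numjobs=1
        # Device order matches the log: nvme0n1, nvme10n1, then nvme1n1..nvme9n1.
        n=0
        for dev in /dev/nvme0n1 /dev/nvme10n1 /dev/nvme{1..9}n1; do
            printf '[job%d]\nfilename=%s\n' "$n" "$dev"
            n=$((n + 1))
        done
    } > "$jobfile"
    fio "$jobfile"

The second pass only swaps rw=read for rw=randwrite; everything else, including the 256KiB block size and queue depth of 64 reported in each job banner, stays identical.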
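Teardown then walks the 11 subsystems in order, and each iteration follows the same three-step pattern traced above: disconnect the initiator from the NQN, wait until the block device whose serial matches SPDKi has dropped out of lsblk (the same serial check that gated the start of the run via grep -c SPDK11), and finally delete the subsystem over RPC. A condensed sketch, with the polling simplified — the real waitforserial_disconnect helper in common/autotest_common.sh bounds its retries rather than looping forever:

    # Simplified equivalent of target/multiconnection.sh@37-40 as traced above.
    # rpc_cmd is the SPDK test harness's RPC wrapper; NVMF_SUBSYS is 11 here.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # Poll until no namespace advertises serial SPDK$i any more.
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done

Only after all eleven subsystems are gone does the script drop its traps, unload nvme-rdma and nvme-fabrics, and kill the target process (pid 362111).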
00:24:37.938 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:37.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.938 --rc genhtml_branch_coverage=1 00:24:37.938 --rc genhtml_function_coverage=1 00:24:37.938 --rc genhtml_legend=1 00:24:37.938 --rc geninfo_all_blocks=1 00:24:37.938 --rc geninfo_unexecuted_blocks=1 00:24:37.938 00:24:37.938 ' 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:37.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.938 --rc genhtml_branch_coverage=1 00:24:37.938 --rc genhtml_function_coverage=1 00:24:37.938 --rc genhtml_legend=1 00:24:37.938 --rc geninfo_all_blocks=1 00:24:37.938 --rc geninfo_unexecuted_blocks=1 00:24:37.938 00:24:37.938 ' 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:37.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.938 --rc genhtml_branch_coverage=1 00:24:37.938 --rc genhtml_function_coverage=1 00:24:37.938 --rc genhtml_legend=1 00:24:37.938 --rc geninfo_all_blocks=1 00:24:37.938 --rc geninfo_unexecuted_blocks=1 00:24:37.938 00:24:37.938 ' 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:37.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.938 --rc genhtml_branch_coverage=1 00:24:37.938 --rc genhtml_function_coverage=1 00:24:37.938 --rc genhtml_legend=1 00:24:37.938 --rc geninfo_all_blocks=1 00:24:37.938 --rc geninfo_unexecuted_blocks=1 00:24:37.938 00:24:37.938 ' 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.938 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.939 19:19:12 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.939 19:19:12 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.068 19:19:19 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:46.068 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:46.068 
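The port found above reports vendor 0x15b3 with device 0x1015, i.e. a Mellanox ConnectX-4 Lx; its sibling port is matched the same way just below. The vendor/device check that nvmf/common.sh performs against its pci_bus_cache can be reproduced by hand from standard sysfs attributes (illustrative sketch, not the script's own code):

    pci=0000:d9:00.0
    ven=$(cat /sys/bus/pci/devices/$pci/vendor)   # 0x15b3 -> Mellanox
    dev=$(cat /sys/bus/pci/devices/$pci/device)   # 0x1015 -> ConnectX-4 Lx
    [[ $ven == 0x15b3 && $dev == 0x1015 ]] && echo "mlx5 RDMA-capable port"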
19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:46.068 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:46.069 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:46.069 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:46.069 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:46.069 19:19:19 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:46.069 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:46.069 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:46.069 altname enp217s0f0np0 00:24:46.069 altname ens818f0np0 00:24:46.069 inet 192.168.100.8/24 scope global mlx_0_0 00:24:46.069 valid_lft forever preferred_lft forever 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:46.069 19:19:19 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:46.069 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:46.069 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:46.069 altname enp217s0f1np1 00:24:46.069 altname ens818f1np1 00:24:46.069 inet 192.168.100.9/24 scope global mlx_0_1 00:24:46.069 valid_lft forever preferred_lft forever 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:46.069 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 
-- # continue 2 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:46.070 192.168.100.9' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:46.070 192.168.100.9' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:46.070 192.168.100.9' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=377276 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 377276 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 377276 ']' 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 [2024-12-13 19:19:19.486057] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:46.070 [2024-12-13 19:19:19.486106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.070 [2024-12-13 19:19:19.576986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.070 [2024-12-13 19:19:19.599355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.070 [2024-12-13 19:19:19.599393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.070 [2024-12-13 19:19:19.599404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.070 [2024-12-13 19:19:19.599412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.070 [2024-12-13 19:19:19.599420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
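waitforlisten above blocks until the freshly started nvmf_tgt (pid 377276) answers on its UNIX-domain RPC socket. A hedged sketch of that idea; the real helper in autotest_common.sh has its own retry budget and error handling, so this only shows the shape of the poll:

    rpc_sock=/var/tmp/spdk.sock
    until scripts/rpc.py -s "$rpc_sock" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done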
00:24:46.070 [2024-12-13 19:19:19.601095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.070 [2024-12-13 19:19:19.601140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.070 [2024-12-13 19:19:19.601247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.070 [2024-12-13 19:19:19.601249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 Malloc0 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 Delay0 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 [2024-12-13 19:19:19.809218] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdb1670/0xc8f080) succeed. 00:24:46.070 [2024-12-13 19:19:19.818674] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdb2cb0/0xcd0720) succeed. 
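With both IB devices up, the target side of initiator_timeout is provisioned: a 64 MiB malloc bdev, a delay bdev stacked on it with 30 us on all four latency knobs, and an RDMA transport; the subsystem, namespace, and 192.168.100.8:4420 listener follow just below. Condensed into plain rpc.py form with the values visible in this log:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

Starting at 30 us is what lets the later bdev_delay_update_latency calls raise the delay to roughly 31 s mid-fio and then drop it back, which is the mechanism this test uses to exercise the initiator timeout path.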
00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:46.070 [2024-12-13 19:19:19.965587] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.070 19:19:19 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:46.638 19:19:20 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:46.638 19:19:20 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:24:46.638 19:19:20 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.638 19:19:20 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:46.638 19:19:20 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=377978 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:49.168 19:19:22 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:49.168 [global] 00:24:49.168 thread=1 00:24:49.168 invalidate=1 00:24:49.168 rw=write 00:24:49.168 time_based=1 00:24:49.168 runtime=60 00:24:49.168 ioengine=libaio 00:24:49.168 direct=1 00:24:49.168 bs=4096 00:24:49.168 iodepth=1 00:24:49.168 norandommap=0 00:24:49.168 numjobs=1 00:24:49.168 00:24:49.168 verify_dump=1 00:24:49.168 verify_backlog=512 00:24:49.168 verify_state_save=0 00:24:49.168 do_verify=1 00:24:49.168 verify=crc32c-intel 00:24:49.168 [job0] 00:24:49.168 filename=/dev/nvme0n1 00:24:49.168 Could not set queue depth (nvme0n1) 00:24:49.168 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:49.168 fio-3.35 00:24:49.168 Starting 1 thread 00:24:51.701 19:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:51.701 19:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.701 19:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:51.701 true 00:24:51.701 19:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.701 19:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:51.701 19:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.701 19:19:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:51.701 true 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:51.701 true 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:51.701 true 00:24:51.701 19:19:26 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.701 19:19:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.234 true 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.234 true 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.234 true 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.234 true 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:55.234 19:19:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 377978 00:25:51.540 00:25:51.540 job0: (groupid=0, jobs=1): err= 0: pid=378140: Fri Dec 13 19:20:23 2024 00:25:51.540 read: IOPS=1285, BW=5141KiB/s (5264kB/s)(301MiB/60000msec) 00:25:51.540 slat (usec): min=8, max=281, avg= 9.05, stdev= 1.41 00:25:51.540 clat (usec): min=37, max=277, avg=103.14, stdev= 6.48 00:25:51.540 lat (usec): min=95, max=318, avg=112.19, stdev= 6.58 00:25:51.540 clat percentiles (usec): 00:25:51.540 | 1.00th=[ 92], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 98], 00:25:51.540 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:25:51.540 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 115], 00:25:51.540 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 128], 99.95th=[ 137], 00:25:51.540 | 99.99th=[ 229] 00:25:51.540 write: IOPS=1288, BW=5154KiB/s (5278kB/s)(302MiB/60000msec); 0 zone resets 00:25:51.540 slat (usec): min=10, max=10844, 
avg=11.93, stdev=39.16 00:25:51.540 clat (usec): min=37, max=42246k, avg=647.07, stdev=151936.29 00:25:51.540 lat (usec): min=94, max=42246k, avg=659.00, stdev=151936.29 00:25:51.540 clat percentiles (usec): 00:25:51.540 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 93], 20.00th=[ 95], 00:25:51.540 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 102], 00:25:51.540 | 70.00th=[ 104], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 112], 00:25:51.540 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 133], 99.95th=[ 155], 00:25:51.540 | 99.99th=[ 260] 00:25:51.540 bw ( KiB/s): min= 3544, max=19208, per=100.00%, avg=16761.33, stdev=3086.88, samples=36 00:25:51.540 iops : min= 886, max= 4802, avg=4190.33, stdev=771.72, samples=36 00:25:51.540 lat (usec) : 50=0.01%, 100=40.70%, 250=59.29%, 500=0.01% 00:25:51.540 lat (msec) : >=2000=0.01% 00:25:51.540 cpu : usr=1.97%, sys=3.51%, ctx=154428, majf=0, minf=143 00:25:51.540 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:51.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.540 issued rwts: total=77111,77312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.540 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:51.540 00:25:51.540 Run status group 0 (all jobs): 00:25:51.541 READ: bw=5141KiB/s (5264kB/s), 5141KiB/s-5141KiB/s (5264kB/s-5264kB/s), io=301MiB (316MB), run=60000-60000msec 00:25:51.541 WRITE: bw=5154KiB/s (5278kB/s), 5154KiB/s-5154KiB/s (5278kB/s-5278kB/s), io=302MiB (317MB), run=60000-60000msec 00:25:51.541 00:25:51.541 Disk stats (read/write): 00:25:51.541 nvme0n1: ios=77060/76800, merge=0/0, ticks=7276/7233, in_queue=14509, util=99.62% 00:25:51.541 19:20:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:51.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:51.541 nvmf hotplug test: fio successful as expected 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:51.541 rmmod nvme_rdma 00:25:51.541 rmmod nvme_fabrics 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 377276 ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 377276 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 377276 ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 377276 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 377276 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 377276' 00:25:51.541 killing process with pid 377276 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 377276 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 377276 00:25:51.541 19:20:24 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:51.541 00:25:51.541 real 1m12.872s 00:25:51.541 user 4m31.685s 00:25:51.541 sys 0m8.304s 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:51.541 ************************************ 00:25:51.541 END TEST nvmf_initiator_timeout 00:25:51.541 ************************************ 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:51.541 ************************************ 00:25:51.541 START TEST nvmf_srq_overwhelm 00:25:51.541 ************************************ 00:25:51.541 19:20:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:25:51.541 * Looking for test storage... 
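For reference before the srq_overwhelm output continues: the initiator-timeout test that just finished above injects its latency through SPDK's delay bdev. A minimal sketch of that RPC sequence, assuming SPDK's scripts/rpc.py is on PATH and talking to the default /var/tmp/spdk.sock socket (the log only shows the wrapped rpc_cmd calls):

  # Raise all four latency knobs of the delay bdev Delay0 to 30 us each
  # (the delay bdev takes latencies in microseconds), mirroring the four
  # bdev_delay_update_latency calls traced in the test.
  for metric in avg_read avg_write p99_read p99_write; do
      rpc.py bdev_delay_update_latency Delay0 "$metric" 30
  done

With the latency raised, the 60-second fio job above still completes: the reported 1285 read IOPS is consistent with the 77111 total read I/Os over 60 s shown in the issued rwts line.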
00:25:51.541 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:51.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.541 --rc genhtml_branch_coverage=1 00:25:51.541 --rc genhtml_function_coverage=1 00:25:51.541 --rc genhtml_legend=1 00:25:51.541 --rc geninfo_all_blocks=1 00:25:51.541 --rc geninfo_unexecuted_blocks=1 00:25:51.541 00:25:51.541 ' 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:51.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.541 --rc genhtml_branch_coverage=1 00:25:51.541 --rc genhtml_function_coverage=1 00:25:51.541 --rc genhtml_legend=1 00:25:51.541 --rc geninfo_all_blocks=1 00:25:51.541 --rc geninfo_unexecuted_blocks=1 00:25:51.541 00:25:51.541 ' 00:25:51.541 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:51.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.542 --rc genhtml_branch_coverage=1 00:25:51.542 --rc genhtml_function_coverage=1 00:25:51.542 --rc genhtml_legend=1 00:25:51.542 --rc geninfo_all_blocks=1 00:25:51.542 --rc geninfo_unexecuted_blocks=1 00:25:51.542 00:25:51.542 ' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:51.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.542 --rc genhtml_branch_coverage=1 00:25:51.542 --rc genhtml_function_coverage=1 00:25:51.542 --rc genhtml_legend=1 00:25:51.542 --rc geninfo_all_blocks=1 00:25:51.542 --rc geninfo_unexecuted_blocks=1 00:25:51.542 00:25:51.542 ' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:51.542 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.542 19:20:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:58.117 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:58.117 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:58.117 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:58.117 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
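The discovery pass above resolves each Mellanox PCI function to its kernel net device purely through sysfs, via the pci_net_devs glob visible in the trace. The same lookup as a standalone sketch, using the 0000:d9:00.0 address reported in this run:

  # List the net device(s) the kernel registered under a PCI function,
  # i.e. the /sys/bus/pci/devices/<pci>/net/* expansion the harness uses.
  pci=0000:d9:00.0
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      echo "${dev##*/}"   # prints mlx_0_0 for this port on this machine
  done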
00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:58.117 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:58.118 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:58.118 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:58.118 altname enp217s0f0np0 00:25:58.118 altname ens818f0np0 00:25:58.118 inet 192.168.100.8/24 scope global mlx_0_0 00:25:58.118 valid_lft forever preferred_lft forever 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:58.118 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:58.118 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:58.118 altname enp217s0f1np1 00:25:58.118 altname ens818f1np1 00:25:58.118 inet 192.168.100.9/24 scope global mlx_0_1 00:25:58.118 valid_lft forever preferred_lft forever 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:58.118 192.168.100.9' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:58.118 192.168.100.9' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:58.118 192.168.100.9' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=391712 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 391712 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 391712 ']' 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
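The get_ip_address helper traced above reduces to a short pipeline: field 4 of the ip -o -4 addr show output is the address/prefix pair, and cut strips the prefix length. As a standalone sketch over the two interfaces found in this run:

  # Extract the primary IPv4 address of each RDMA interface, exactly as
  # the harness does with ip/awk/cut.
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # Prints 192.168.100.8 and 192.168.100.9 here; these become
  # NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP for the test.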
00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.118 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.118 [2024-12-13 19:20:32.447871] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:58.118 [2024-12-13 19:20:32.447923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.378 [2024-12-13 19:20:32.542016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.378 [2024-12-13 19:20:32.564622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.378 [2024-12-13 19:20:32.564658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.378 [2024-12-13 19:20:32.564667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.378 [2024-12-13 19:20:32.564676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.378 [2024-12-13 19:20:32.564682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.378 [2024-12-13 19:20:32.566481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.378 [2024-12-13 19:20:32.566500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.378 [2024-12-13 19:20:32.566595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.378 [2024-12-13 19:20:32.566597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.378 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.378 [2024-12-13 19:20:32.742513] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x818540/0x81c9f0) succeed. 00:25:58.378 [2024-12-13 19:20:32.751812] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x819b80/0x85e090) succeed. 
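The target is now up: the transport is created once, and the loop that follows builds one subsystem per iteration (create the subsystem, back it with a malloc bdev, attach the namespace, open an RDMA listener, then connect from the host). Condensed into plain rpc.py calls, a sketch assuming the same rpc.py/socket conventions as above and the serial-number pattern visible in the log:

  # One-time transport setup, as in the nvmf_create_transport call above:
  # RDMA transport, 1024 shared buffers, 8 KiB I/O unit size, and a
  # shared-receive-queue depth of 1024 (-s), which this srq_overwhelm
  # test is designed to saturate.
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024

  for i in $(seq 0 5); do
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a \
             -s "$(printf 'SPDK%014d' "$i")"           # serials as in the log
      rpc.py bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB, 512 B blocks
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
             -t rdma -a 192.168.100.8 -s 4420
  done

Each pass in the traced loop then runs nvme connect -i 15 against the new subsystem and waits for the matching nvme<i>n1 block device (the waitforblk calls) before moving on to the next cnode.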
00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.637 Malloc0 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.637 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.638 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:25:58.638 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.638 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:58.638 [2024-12-13 19:20:32.857678] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:58.638 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.638 19:20:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# lsblk -l -o NAME 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:59.575 Malloc1 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.575 19:20:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:00.952 Malloc2 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.952 19:20:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:26:01.888 19:20:35 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.888 19:20:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:01.888 Malloc3 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.888 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:26:02.825 19:20:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:02.825 
19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:02.825 Malloc4 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.825 19:20:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:03.762 Malloc5 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.762 19:20:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:26:05.163 19:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:26:05.163 19:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:05.163 19:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:05.163 19:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:26:05.163 19:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:26:05.163 19:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:05.163 19:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:05.163 19:20:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:26:05.163 
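The five subsystem setups traced above are iterations of the same `for i in $(seq 0 5)` loop in srq_overwhelm.sh: each pass creates a subsystem, backs it with a malloc bdev, exposes that bdev as a namespace, opens an RDMA listener on 192.168.100.8:4420, connects from the host side, and polls until the new block device appears. Below is a condensed sketch of that sequence, assuming the harness's `rpc_cmd` helper and using `$HOSTNQN`/`$HOSTID` as stand-ins for the literal host NQN/ID shown in the trace; the retry bound in `waitforblk` is an assumption, since the trace only shows the success path.

  # One pass per subsystem, cnode0..cnode5; the excerpt above shows passes 1-5.
  waitforblk() {                                    # simplified from the autotest_common.sh:1239-1250 trace
    local i=0
    while ! lsblk -l -o NAME | grep -q -w "$1"; do  # the lsblk|grep probe traced at @1240/@1246
      ((i++ < 15)) || return 1                      # assumed retry bound
      sleep 1
    done
    return 0                                        # traced at @1250
  }
  for i in $(seq 0 5); do
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"  # -a: allow any host
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                # 64 MB ramdisk bdev, 512-byte blocks
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    waitforblk "nvme${i}n1"                         # block until the namespace surfaces as /dev/nvme${i}n1
  done

With all six namespaces connected, the harness launches the fio wrapper recorded above; the [global] and [jobN] stanzas that follow are the job file it feeds to fio.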
[global] 00:26:05.163 thread=1 00:26:05.163 invalidate=1 00:26:05.163 rw=read 00:26:05.163 time_based=1 00:26:05.163 runtime=10 00:26:05.163 ioengine=libaio 00:26:05.163 direct=1 00:26:05.163 bs=1048576 00:26:05.163 iodepth=128 00:26:05.163 norandommap=1 00:26:05.163 numjobs=13 00:26:05.163 00:26:05.163 [job0] 00:26:05.163 filename=/dev/nvme0n1 00:26:05.163 [job1] 00:26:05.163 filename=/dev/nvme1n1 00:26:05.163 [job2] 00:26:05.163 filename=/dev/nvme2n1 00:26:05.163 [job3] 00:26:05.163 filename=/dev/nvme3n1 00:26:05.163 [job4] 00:26:05.163 filename=/dev/nvme4n1 00:26:05.163 [job5] 00:26:05.163 filename=/dev/nvme5n1 00:26:05.163 Could not set queue depth (nvme0n1) 00:26:05.163 Could not set queue depth (nvme1n1) 00:26:05.163 Could not set queue depth (nvme2n1) 00:26:05.163 Could not set queue depth (nvme3n1) 00:26:05.163 Could not set queue depth (nvme4n1) 00:26:05.163 Could not set queue depth (nvme5n1) 00:26:05.428 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:05.428 ... 00:26:05.428 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:05.428 ... 00:26:05.428 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:05.428 ... 00:26:05.428 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:05.428 ... 00:26:05.428 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:05.428 ... 00:26:05.428 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:05.428 ... 
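Read together, the stanzas above give each of the six namespaces a 1 MiB (bs=1048576) sequential-read workload at queue depth 128, time-based for 10 seconds, with numjobs=13 clones per device; 6 devices x 13 clones = 78, which matches the "Starting 78 threads" line fio prints next (thread=1 makes the clones threads rather than forked processes). The "Could not set queue depth" warnings appear to be fio failing to adjust the kernel block-queue depth on the freshly attached devices; the jobs still start, so they look harmless here. For reference, one stanza corresponds to roughly this standalone invocation, using standard fio flags; this is a sketch, not the wrapper's actual command line:

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=read --bs=1048576 --iodepth=128 --numjobs=13 \
      --ioengine=libaio --direct=1 --invalidate=1 --norandommap \
      --time_based --runtime=10 --thread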
00:26:05.428 fio-3.35 00:26:05.428 Starting 78 threads 00:26:17.642 00:26:17.642 job0: (groupid=0, jobs=1): err= 0: pid=393094: Fri Dec 13 19:20:50 2024 00:26:17.642 read: IOPS=55, BW=55.5MiB/s (58.2MB/s)(562MiB/10127msec) 00:26:17.642 slat (usec): min=44, max=2087.1k, avg=17821.62, stdev=145724.56 00:26:17.642 clat (msec): min=107, max=6893, avg=1554.87, stdev=1857.56 00:26:17.642 lat (msec): min=146, max=6909, avg=1572.69, stdev=1869.76 00:26:17.642 clat percentiles (msec): 00:26:17.642 | 1.00th=[ 194], 5.00th=[ 489], 10.00th=[ 651], 20.00th=[ 659], 00:26:17.642 | 30.00th=[ 659], 40.00th=[ 693], 50.00th=[ 768], 60.00th=[ 902], 00:26:17.642 | 70.00th=[ 1217], 80.00th=[ 1485], 90.00th=[ 5000], 95.00th=[ 6879], 00:26:17.642 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:26:17.642 | 99.99th=[ 6879] 00:26:17.642 bw ( KiB/s): min=75776, max=192512, per=3.35%, avg=126976.00, stdev=60684.42, samples=7 00:26:17.642 iops : min= 74, max= 188, avg=124.00, stdev=59.26, samples=7 00:26:17.642 lat (msec) : 250=1.78%, 500=3.38%, 750=42.35%, 1000=15.48%, 2000=23.49% 00:26:17.642 lat (msec) : >=2000=13.52% 00:26:17.642 cpu : usr=0.01%, sys=1.89%, ctx=605, majf=0, minf=32769 00:26:17.642 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.8% 00:26:17.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.642 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:17.642 issued rwts: total=562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.642 job0: (groupid=0, jobs=1): err= 0: pid=393095: Fri Dec 13 19:20:50 2024 00:26:17.642 read: IOPS=3, BW=3663KiB/s (3751kB/s)(38.0MiB/10624msec) 00:26:17.642 slat (usec): min=677, max=2120.2k, avg=277614.47, stdev=698920.88 00:26:17.642 clat (msec): min=74, max=10619, avg=8966.62, stdev=2885.85 00:26:17.642 lat (msec): min=2131, max=10623, avg=9244.23, stdev=2487.15 00:26:17.642 clat percentiles (msec): 00:26:17.642 | 1.00th=[ 74], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 6477], 00:26:17.642 | 30.00th=[10402], 40.00th=[10402], 50.00th=[10402], 60.00th=[10537], 00:26:17.642 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.642 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.642 | 99.99th=[10671] 00:26:17.642 lat (msec) : 100=2.63%, >=2000=97.37% 00:26:17.642 cpu : usr=0.00%, sys=0.33%, ctx=73, majf=0, minf=9729 00:26:17.642 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:26:17.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.642 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.642 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.642 job0: (groupid=0, jobs=1): err= 0: pid=393096: Fri Dec 13 19:20:50 2024 00:26:17.642 read: IOPS=6, BW=6911KiB/s (7077kB/s)(72.0MiB/10668msec) 00:26:17.642 slat (usec): min=823, max=3335.4k, avg=147359.15, stdev=565837.13 00:26:17.642 clat (msec): min=57, max=10664, avg=9074.62, stdev=2723.17 00:26:17.642 lat (msec): min=2120, max=10667, avg=9221.98, stdev=2506.81 00:26:17.642 clat percentiles (msec): 00:26:17.642 | 1.00th=[ 57], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 9866], 00:26:17.642 | 30.00th=[10000], 40.00th=[10134], 50.00th=[10134], 60.00th=[10268], 00:26:17.642 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10537], 
95.00th=[10671], 00:26:17.642 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.642 | 99.99th=[10671] 00:26:17.642 lat (msec) : 100=1.39%, >=2000=98.61% 00:26:17.642 cpu : usr=0.00%, sys=0.50%, ctx=242, majf=0, minf=18433 00:26:17.642 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:26:17.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.642 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:17.642 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.642 job0: (groupid=0, jobs=1): err= 0: pid=393097: Fri Dec 13 19:20:50 2024 00:26:17.642 read: IOPS=3, BW=3579KiB/s (3665kB/s)(37.0MiB/10586msec) 00:26:17.642 slat (usec): min=937, max=2086.0k, avg=284546.78, stdev=704316.02 00:26:17.642 clat (msec): min=57, max=10506, avg=5325.31, stdev=2952.94 00:26:17.642 lat (msec): min=2137, max=10585, avg=5609.85, stdev=2938.43 00:26:17.642 clat percentiles (msec): 00:26:17.642 | 1.00th=[ 58], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:26:17.642 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 4329], 60.00th=[ 6409], 00:26:17.642 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[10537], 95.00th=[10537], 00:26:17.642 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.642 | 99.99th=[10537] 00:26:17.642 lat (msec) : 100=2.70%, >=2000=97.30% 00:26:17.642 cpu : usr=0.00%, sys=0.34%, ctx=47, majf=0, minf=9473 00:26:17.642 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:26:17.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.642 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.642 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.642 job0: (groupid=0, jobs=1): err= 0: pid=393098: Fri Dec 13 19:20:50 2024 00:26:17.642 read: IOPS=12, BW=12.8MiB/s (13.4MB/s)(136MiB/10624msec) 00:26:17.642 slat (usec): min=59, max=2106.0k, avg=73566.74, stdev=345140.68 00:26:17.642 clat (msec): min=618, max=10602, avg=3148.38, stdev=3469.42 00:26:17.642 lat (msec): min=624, max=10605, avg=3221.94, stdev=3521.05 00:26:17.642 clat percentiles (msec): 00:26:17.642 | 1.00th=[ 625], 5.00th=[ 667], 10.00th=[ 693], 20.00th=[ 927], 00:26:17.642 | 30.00th=[ 1150], 40.00th=[ 1368], 50.00th=[ 1586], 60.00th=[ 1804], 00:26:17.642 | 70.00th=[ 2039], 80.00th=[ 6409], 90.00th=[10000], 95.00th=[10537], 00:26:17.642 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.642 | 99.99th=[10537] 00:26:17.642 bw ( KiB/s): min=17825, max=17825, per=0.47%, avg=17825.00, stdev= 0.00, samples=1 00:26:17.642 iops : min= 17, max= 17, avg=17.00, stdev= 0.00, samples=1 00:26:17.642 lat (msec) : 750=11.03%, 1000=11.76%, 2000=46.32%, >=2000=30.88% 00:26:17.642 cpu : usr=0.01%, sys=1.14%, ctx=238, majf=0, minf=32769 00:26:17.642 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.9%, 16=11.8%, 32=23.5%, >=64=53.7% 00:26:17.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.642 complete : 0=0.0%, 4=90.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=10.0% 00:26:17.642 issued rwts: total=136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.642 job0: (groupid=0, jobs=1): err= 0: pid=393099: Fri Dec 13 19:20:50 2024 00:26:17.642 read: 
IOPS=31, BW=31.4MiB/s (33.0MB/s)(338MiB/10755msec) 00:26:17.642 slat (usec): min=416, max=2100.0k, avg=31652.69, stdev=226334.85 00:26:17.642 clat (msec): min=54, max=9259, avg=3838.97, stdev=3831.69 00:26:17.642 lat (msec): min=673, max=9267, avg=3870.63, stdev=3834.23 00:26:17.642 clat percentiles (msec): 00:26:17.642 | 1.00th=[ 667], 5.00th=[ 676], 10.00th=[ 684], 20.00th=[ 701], 00:26:17.642 | 30.00th=[ 718], 40.00th=[ 810], 50.00th=[ 927], 60.00th=[ 2836], 00:26:17.642 | 70.00th=[ 8792], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9194], 00:26:17.642 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:26:17.642 | 99.99th=[ 9194] 00:26:17.642 bw ( KiB/s): min= 8192, max=192512, per=1.89%, avg=71680.00, stdev=88647.89, samples=6 00:26:17.642 iops : min= 8, max= 188, avg=70.00, stdev=86.57, samples=6 00:26:17.642 lat (msec) : 100=0.30%, 750=34.62%, 1000=15.98%, 2000=7.69%, >=2000=41.42% 00:26:17.642 cpu : usr=0.01%, sys=1.17%, ctx=640, majf=0, minf=32769 00:26:17.642 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.5%, >=64=81.4% 00:26:17.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.642 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:26:17.642 issued rwts: total=338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.642 job0: (groupid=0, jobs=1): err= 0: pid=393100: Fri Dec 13 19:20:50 2024 00:26:17.642 read: IOPS=218, BW=218MiB/s (229MB/s)(2334MiB/10699msec) 00:26:17.642 slat (usec): min=42, max=2042.1k, avg=4548.75, stdev=42393.50 00:26:17.642 clat (msec): min=65, max=2558, avg=567.99, stdev=443.26 00:26:17.642 lat (msec): min=369, max=2559, avg=572.53, stdev=444.38 00:26:17.642 clat percentiles (msec): 00:26:17.642 | 1.00th=[ 372], 5.00th=[ 372], 10.00th=[ 376], 20.00th=[ 376], 00:26:17.642 | 30.00th=[ 384], 40.00th=[ 418], 50.00th=[ 451], 60.00th=[ 485], 00:26:17.642 | 70.00th=[ 502], 80.00th=[ 617], 90.00th=[ 651], 95.00th=[ 2232], 00:26:17.642 | 99.00th=[ 2500], 99.50th=[ 2534], 99.90th=[ 2567], 99.95th=[ 2567], 00:26:17.642 | 99.99th=[ 2567] 00:26:17.642 bw ( KiB/s): min=10240, max=346112, per=7.01%, avg=265758.12, stdev=85910.88, samples=17 00:26:17.642 iops : min= 10, max= 338, avg=259.53, stdev=83.90, samples=17 00:26:17.642 lat (msec) : 100=0.04%, 500=67.31%, 750=27.21%, >=2000=5.44% 00:26:17.642 cpu : usr=0.07%, sys=3.60%, ctx=2073, majf=0, minf=32769 00:26:17.642 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:17.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.642 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.642 job0: (groupid=0, jobs=1): err= 0: pid=393101: Fri Dec 13 19:20:50 2024 00:26:17.642 read: IOPS=140, BW=141MiB/s (147MB/s)(1508MiB/10732msec) 00:26:17.642 slat (usec): min=57, max=2066.9k, avg=7057.45, stdev=72570.80 00:26:17.642 clat (msec): min=82, max=4535, avg=867.33, stdev=871.02 00:26:17.642 lat (msec): min=431, max=4535, avg=874.39, stdev=874.98 00:26:17.642 clat percentiles (msec): 00:26:17.642 | 1.00th=[ 443], 5.00th=[ 485], 10.00th=[ 489], 20.00th=[ 498], 00:26:17.642 | 30.00th=[ 506], 40.00th=[ 535], 50.00th=[ 558], 60.00th=[ 600], 00:26:17.642 | 70.00th=[ 625], 80.00th=[ 659], 90.00th=[ 2123], 95.00th=[ 2668], 00:26:17.642 | 99.00th=[ 4463], 
99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:26:17.642 | 99.99th=[ 4530] 00:26:17.642 bw ( KiB/s): min=88064, max=274432, per=5.73%, avg=217403.08, stdev=52579.82, samples=13 00:26:17.642 iops : min= 86, max= 268, avg=212.31, stdev=51.35, samples=13 00:26:17.642 lat (msec) : 100=0.07%, 500=25.27%, 750=60.88%, 1000=0.46%, 2000=1.06% 00:26:17.643 lat (msec) : >=2000=12.27% 00:26:17.643 cpu : usr=0.05%, sys=2.35%, ctx=1267, majf=0, minf=32769 00:26:17.643 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:26:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.643 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.643 issued rwts: total=1508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.643 job0: (groupid=0, jobs=1): err= 0: pid=393102: Fri Dec 13 19:20:50 2024 00:26:17.643 read: IOPS=3, BW=4019KiB/s (4115kB/s)(42.0MiB/10702msec) 00:26:17.643 slat (usec): min=556, max=2117.4k, avg=253153.27, stdev=666910.72 00:26:17.643 clat (msec): min=69, max=10689, avg=8624.20, stdev=3216.19 00:26:17.643 lat (msec): min=2117, max=10701, avg=8877.36, stdev=2932.30 00:26:17.643 clat percentiles (msec): 00:26:17.643 | 1.00th=[ 69], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 6477], 00:26:17.643 | 30.00th=[ 8658], 40.00th=[10402], 50.00th=[10537], 60.00th=[10537], 00:26:17.643 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:26:17.643 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.643 | 99.99th=[10671] 00:26:17.643 lat (msec) : 100=2.38%, >=2000=97.62% 00:26:17.643 cpu : usr=0.00%, sys=0.39%, ctx=80, majf=0, minf=10753 00:26:17.643 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:26:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.643 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.643 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.643 job0: (groupid=0, jobs=1): err= 0: pid=393103: Fri Dec 13 19:20:50 2024 00:26:17.643 read: IOPS=12, BW=12.9MiB/s (13.5MB/s)(137MiB/10652msec) 00:26:17.643 slat (usec): min=943, max=2118.2k, avg=77274.12, stdev=349839.29 00:26:17.643 clat (msec): min=63, max=10481, avg=8797.54, stdev=1912.15 00:26:17.643 lat (msec): min=2124, max=10513, avg=8874.82, stdev=1762.99 00:26:17.643 clat percentiles (msec): 00:26:17.643 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 6275], 20.00th=[ 8658], 00:26:17.643 | 30.00th=[ 8792], 40.00th=[ 9060], 50.00th=[ 9194], 60.00th=[ 9463], 00:26:17.643 | 70.00th=[ 9731], 80.00th=[10134], 90.00th=[10268], 95.00th=[10402], 00:26:17.643 | 99.00th=[10402], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.643 | 99.99th=[10537] 00:26:17.643 bw ( KiB/s): min= 2048, max= 8192, per=0.12%, avg=4608.00, stdev=2577.01, samples=4 00:26:17.643 iops : min= 2, max= 8, avg= 4.50, stdev= 2.52, samples=4 00:26:17.643 lat (msec) : 100=0.73%, >=2000=99.27% 00:26:17.643 cpu : usr=0.00%, sys=0.99%, ctx=297, majf=0, minf=32769 00:26:17.643 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.8%, 16=11.7%, 32=23.4%, >=64=54.0% 00:26:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.643 complete : 0=0.0%, 4=90.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=9.1% 00:26:17.643 issued rwts: total=137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.643 job0: (groupid=0, jobs=1): err= 0: pid=393104: Fri Dec 13 19:20:50 2024 00:26:17.643 read: IOPS=25, BW=25.5MiB/s (26.7MB/s)(272MiB/10679msec) 00:26:17.643 slat (usec): min=570, max=2125.5k, avg=39053.34, stdev=252955.20 00:26:17.643 clat (msec): min=54, max=9529, avg=4720.84, stdev=3985.42 00:26:17.643 lat (msec): min=874, max=9538, avg=4759.90, stdev=3982.27 00:26:17.643 clat percentiles (msec): 00:26:17.643 | 1.00th=[ 869], 5.00th=[ 885], 10.00th=[ 902], 20.00th=[ 911], 00:26:17.643 | 30.00th=[ 919], 40.00th=[ 936], 50.00th=[ 1053], 60.00th=[ 8792], 00:26:17.643 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9463], 00:26:17.643 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9597], 99.95th=[ 9597], 00:26:17.643 | 99.99th=[ 9597] 00:26:17.643 bw ( KiB/s): min= 4096, max=137216, per=1.11%, avg=42130.29, stdev=63574.45, samples=7 00:26:17.643 iops : min= 4, max= 134, avg=41.14, stdev=62.08, samples=7 00:26:17.643 lat (msec) : 100=0.37%, 1000=45.22%, 2000=4.41%, >=2000=50.00% 00:26:17.643 cpu : usr=0.02%, sys=1.02%, ctx=609, majf=0, minf=32769 00:26:17.643 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.8%, >=64=76.8% 00:26:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.643 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:26:17.643 issued rwts: total=272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.643 job0: (groupid=0, jobs=1): err= 0: pid=393105: Fri Dec 13 19:20:50 2024 00:26:17.643 read: IOPS=15, BW=15.9MiB/s (16.7MB/s)(169MiB/10597msec) 00:26:17.643 slat (usec): min=64, max=2110.6k, avg=62280.02, stdev=316202.60 00:26:17.643 clat (msec): min=70, max=10573, avg=7343.40, stdev=3164.60 00:26:17.643 lat (msec): min=1484, max=10583, avg=7405.68, stdev=3116.43 00:26:17.643 clat percentiles (msec): 00:26:17.643 | 1.00th=[ 1485], 5.00th=[ 1552], 10.00th=[ 1569], 20.00th=[ 3641], 00:26:17.643 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 9060], 60.00th=[ 9194], 00:26:17.643 | 70.00th=[ 9463], 80.00th=[ 9597], 90.00th=[ 9866], 95.00th=[10000], 00:26:17.643 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.643 | 99.99th=[10537] 00:26:17.643 bw ( KiB/s): min= 6144, max=47104, per=0.44%, avg=16793.60, stdev=17036.60, samples=5 00:26:17.643 iops : min= 6, max= 46, avg=16.40, stdev=16.64, samples=5 00:26:17.643 lat (msec) : 100=0.59%, 2000=16.57%, >=2000=82.84% 00:26:17.643 cpu : usr=0.00%, sys=0.83%, ctx=297, majf=0, minf=32769 00:26:17.643 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.7%, 16=9.5%, 32=18.9%, >=64=62.7% 00:26:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.643 complete : 0=0.0%, 4=97.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.3% 00:26:17.643 issued rwts: total=169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.643 job0: (groupid=0, jobs=1): err= 0: pid=393106: Fri Dec 13 19:20:50 2024 00:26:17.643 read: IOPS=4, BW=4327KiB/s (4431kB/s)(45.0MiB/10649msec) 00:26:17.643 slat (usec): min=536, max=2097.1k, avg=235046.10, stdev=647752.72 00:26:17.643 clat (msec): min=70, max=10524, avg=5658.24, stdev=3133.11 00:26:17.643 lat (msec): min=2145, max=10648, avg=5893.29, stdev=3101.00 00:26:17.643 clat percentiles (msec): 00:26:17.643 | 1.00th=[ 71], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2165], 00:26:17.643 | 
30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 4329], 60.00th=[ 6477], 00:26:17.643 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[10537], 95.00th=[10537], 00:26:17.643 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.643 | 99.99th=[10537] 00:26:17.643 lat (msec) : 100=2.22%, >=2000=97.78% 00:26:17.643 cpu : usr=0.00%, sys=0.40%, ctx=53, majf=0, minf=11521 00:26:17.643 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:26:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.643 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.643 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.643 job1: (groupid=0, jobs=1): err= 0: pid=393107: Fri Dec 13 19:20:50 2024 00:26:17.643 read: IOPS=50, BW=50.6MiB/s (53.0MB/s)(539MiB/10661msec) 00:26:17.643 slat (usec): min=42, max=2159.6k, avg=19648.94, stdev=180043.39 00:26:17.643 clat (msec): min=66, max=8976, avg=2395.38, stdev=3306.72 00:26:17.643 lat (msec): min=382, max=8978, avg=2415.03, stdev=3315.70 00:26:17.643 clat percentiles (msec): 00:26:17.643 | 1.00th=[ 380], 5.00th=[ 384], 10.00th=[ 384], 20.00th=[ 388], 00:26:17.643 | 30.00th=[ 388], 40.00th=[ 393], 50.00th=[ 409], 60.00th=[ 502], 00:26:17.643 | 70.00th=[ 802], 80.00th=[ 6477], 90.00th=[ 8792], 95.00th=[ 8926], 00:26:17.643 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:26:17.643 | 99.99th=[ 8926] 00:26:17.643 bw ( KiB/s): min= 4096, max=329728, per=3.17%, avg=120246.86, stdev=145493.14, samples=7 00:26:17.643 iops : min= 4, max= 322, avg=117.43, stdev=142.08, samples=7 00:26:17.643 lat (msec) : 100=0.19%, 500=59.93%, 750=8.91%, 1000=1.48%, >=2000=29.50% 00:26:17.643 cpu : usr=0.05%, sys=1.22%, ctx=543, majf=0, minf=32769 00:26:17.643 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3% 00:26:17.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.643 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:17.643 issued rwts: total=539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.643 job1: (groupid=0, jobs=1): err= 0: pid=393108: Fri Dec 13 19:20:50 2024 00:26:17.643 read: IOPS=22, BW=22.4MiB/s (23.5MB/s)(227MiB/10115msec) 00:26:17.643 slat (usec): min=611, max=2103.2k, avg=44255.27, stdev=227399.54 00:26:17.643 clat (msec): min=66, max=7945, avg=2432.02, stdev=2027.42 00:26:17.643 lat (msec): min=127, max=7957, avg=2476.27, stdev=2064.24 00:26:17.643 clat percentiles (msec): 00:26:17.643 | 1.00th=[ 148], 5.00th=[ 288], 10.00th=[ 510], 20.00th=[ 1167], 00:26:17.643 | 30.00th=[ 1737], 40.00th=[ 1888], 50.00th=[ 2022], 60.00th=[ 2198], 00:26:17.643 | 70.00th=[ 2333], 80.00th=[ 2467], 90.00th=[ 7819], 95.00th=[ 7886], 00:26:17.643 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953], 00:26:17.643 | 99.99th=[ 7953] 00:26:17.643 bw ( KiB/s): min=20480, max=69632, per=1.05%, avg=39846.20, stdev=20387.83, samples=5 00:26:17.643 iops : min= 20, max= 68, avg=38.80, stdev=19.83, samples=5 00:26:17.643 lat (msec) : 100=0.44%, 250=3.08%, 500=5.73%, 750=5.29%, 1000=2.20% 00:26:17.643 lat (msec) : 2000=29.96%, >=2000=53.30% 00:26:17.643 cpu : usr=0.00%, sys=1.16%, ctx=628, majf=0, minf=32769 00:26:17.643 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.0%, 32=14.1%, >=64=72.2% 00:26:17.643 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.643 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:26:17.643 issued rwts: total=227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.643 job1: (groupid=0, jobs=1): err= 0: pid=393109: Fri Dec 13 19:20:50 2024 00:26:17.643 read: IOPS=19, BW=19.3MiB/s (20.3MB/s)(208MiB/10755msec) 00:26:17.643 slat (usec): min=977, max=2130.8k, avg=51375.97, stdev=252176.87 00:26:17.643 clat (msec): min=66, max=8420, avg=3939.52, stdev=2472.07 00:26:17.643 lat (msec): min=1171, max=8501, avg=3990.90, stdev=2476.23 00:26:17.643 clat percentiles (msec): 00:26:17.643 | 1.00th=[ 1234], 5.00th=[ 1385], 10.00th=[ 1620], 20.00th=[ 2299], 00:26:17.643 | 30.00th=[ 2433], 40.00th=[ 2534], 50.00th=[ 2869], 60.00th=[ 3171], 00:26:17.643 | 70.00th=[ 3507], 80.00th=[ 7886], 90.00th=[ 8221], 95.00th=[ 8356], 00:26:17.643 | 99.00th=[ 8356], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:26:17.643 | 99.99th=[ 8423] 00:26:17.643 bw ( KiB/s): min= 4096, max=61440, per=0.86%, avg=32736.60, stdev=22374.18, samples=5 00:26:17.644 iops : min= 4, max= 60, avg=31.60, stdev=21.73, samples=5 00:26:17.644 lat (msec) : 100=0.48%, 2000=14.42%, >=2000=85.10% 00:26:17.644 cpu : usr=0.00%, sys=1.27%, ctx=605, majf=0, minf=32769 00:26:17.644 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.7%, 32=15.4%, >=64=69.7% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:26:17.644 issued rwts: total=208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393110: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=10, BW=10.2MiB/s (10.7MB/s)(109MiB/10649msec) 00:26:17.644 slat (usec): min=413, max=4299.3k, avg=97553.94, stdev=554716.04 00:26:17.644 clat (msec): min=14, max=10645, avg=9570.59, stdev=2272.04 00:26:17.644 lat (msec): min=1785, max=10648, avg=9668.14, stdev=2077.93 00:26:17.644 clat percentiles (msec): 00:26:17.644 | 1.00th=[ 1787], 5.00th=[ 2005], 10.00th=[ 9866], 20.00th=[10000], 00:26:17.644 | 30.00th=[10000], 40.00th=[10134], 50.00th=[10134], 60.00th=[10268], 00:26:17.644 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10671], 95.00th=[10671], 00:26:17.644 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.644 | 99.99th=[10671] 00:26:17.644 lat (msec) : 20=0.92%, 2000=3.67%, >=2000=95.41% 00:26:17.644 cpu : usr=0.00%, sys=0.73%, ctx=235, majf=0, minf=27905 00:26:17.644 IO depths : 1=0.9%, 2=1.8%, 4=3.7%, 8=7.3%, 16=14.7%, 32=29.4%, >=64=42.2% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:17.644 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393111: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=5, BW=5452KiB/s (5583kB/s)(57.0MiB/10706msec) 00:26:17.644 slat (usec): min=970, max=2107.8k, avg=186446.07, stdev=574021.04 00:26:17.644 clat (msec): min=77, max=10703, avg=9407.65, stdev=2546.16 00:26:17.644 lat (msec): min=2145, max=10705, avg=9594.10, stdev=2218.85 00:26:17.644 clat percentiles (msec): 00:26:17.644 | 1.00th=[ 79], 5.00th=[ 4212], 10.00th=[ 
4329], 20.00th=[ 8658], 00:26:17.644 | 30.00th=[10537], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:26:17.644 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.644 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.644 | 99.99th=[10671] 00:26:17.644 lat (msec) : 100=1.75%, >=2000=98.25% 00:26:17.644 cpu : usr=0.00%, sys=0.56%, ctx=116, majf=0, minf=14593 00:26:17.644 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.644 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393112: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=20, BW=20.9MiB/s (21.9MB/s)(211MiB/10107msec) 00:26:17.644 slat (usec): min=643, max=2132.9k, avg=47577.75, stdev=240565.69 00:26:17.644 clat (msec): min=66, max=8919, avg=2008.49, stdev=1947.73 00:26:17.644 lat (msec): min=123, max=8936, avg=2056.06, stdev=2004.88 00:26:17.644 clat percentiles (msec): 00:26:17.644 | 1.00th=[ 126], 5.00th=[ 174], 10.00th=[ 351], 20.00th=[ 584], 00:26:17.644 | 30.00th=[ 869], 40.00th=[ 1301], 50.00th=[ 1854], 60.00th=[ 2232], 00:26:17.644 | 70.00th=[ 2366], 80.00th=[ 2534], 90.00th=[ 2702], 95.00th=[ 8792], 00:26:17.644 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:26:17.644 | 99.99th=[ 8926] 00:26:17.644 bw ( KiB/s): min=14336, max=82548, per=1.46%, avg=55505.33, stdev=36233.84, samples=3 00:26:17.644 iops : min= 14, max= 80, avg=54.00, stdev=35.16, samples=3 00:26:17.644 lat (msec) : 100=0.47%, 250=6.64%, 500=10.43%, 750=9.00%, 1000=7.58% 00:26:17.644 lat (msec) : 2000=18.48%, >=2000=47.39% 00:26:17.644 cpu : usr=0.00%, sys=1.05%, ctx=636, majf=0, minf=32769 00:26:17.644 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.6%, 32=15.2%, >=64=70.1% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:26:17.644 issued rwts: total=211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393113: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=2, BW=2696KiB/s (2761kB/s)(28.0MiB/10634msec) 00:26:17.644 slat (usec): min=1003, max=2151.7k, avg=376703.79, stdev=797796.44 00:26:17.644 clat (msec): min=86, max=10631, avg=8055.13, stdev=3345.67 00:26:17.644 lat (msec): min=2145, max=10633, avg=8431.83, stdev=2990.09 00:26:17.644 clat percentiles (msec): 00:26:17.644 | 1.00th=[ 87], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 4329], 00:26:17.644 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10537], 60.00th=[10537], 00:26:17.644 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.644 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.644 | 99.99th=[10671] 00:26:17.644 lat (msec) : 100=3.57%, >=2000=96.43% 00:26:17.644 cpu : usr=0.00%, sys=0.20%, ctx=71, majf=0, minf=7169 00:26:17.644 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.644 issued 
rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393114: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=4, BW=4778KiB/s (4893kB/s)(50.0MiB/10715msec) 00:26:17.644 slat (usec): min=1454, max=2158.5k, avg=212722.64, stdev=594988.10 00:26:17.644 clat (msec): min=78, max=10712, avg=5987.41, stdev=4152.92 00:26:17.644 lat (msec): min=1757, max=10714, avg=6200.13, stdev=4116.30 00:26:17.644 clat percentiles (msec): 00:26:17.644 | 1.00th=[ 79], 5.00th=[ 1821], 10.00th=[ 1838], 20.00th=[ 1938], 00:26:17.644 | 30.00th=[ 2056], 40.00th=[ 2140], 50.00th=[ 4245], 60.00th=[ 8658], 00:26:17.644 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.644 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.644 | 99.99th=[10671] 00:26:17.644 lat (msec) : 100=2.00%, 2000=24.00%, >=2000=74.00% 00:26:17.644 cpu : usr=0.00%, sys=0.40%, ctx=161, majf=0, minf=12801 00:26:17.644 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.644 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393115: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=4, BW=4987KiB/s (5106kB/s)(52.0MiB/10678msec) 00:26:17.644 slat (usec): min=943, max=2104.1k, avg=203793.38, stdev=606049.97 00:26:17.644 clat (msec): min=80, max=10676, avg=6635.08, stdev=3616.24 00:26:17.644 lat (msec): min=2114, max=10677, avg=6838.88, stdev=3537.34 00:26:17.644 clat percentiles (msec): 00:26:17.644 | 1.00th=[ 81], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2165], 00:26:17.644 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[10402], 00:26:17.644 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.644 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.644 | 99.99th=[10671] 00:26:17.644 lat (msec) : 100=1.92%, >=2000=98.08% 00:26:17.644 cpu : usr=0.00%, sys=0.54%, ctx=84, majf=0, minf=13313 00:26:17.644 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.644 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393116: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=5, BW=5337KiB/s (5465kB/s)(56.0MiB/10744msec) 00:26:17.644 slat (usec): min=983, max=2137.3k, avg=190366.89, stdev=581087.05 00:26:17.644 clat (msec): min=82, max=10740, avg=9198.71, stdev=2925.09 00:26:17.644 lat (msec): min=2127, max=10743, avg=9389.08, stdev=2655.48 00:26:17.644 clat percentiles (msec): 00:26:17.644 | 1.00th=[ 83], 5.00th=[ 2165], 10.00th=[ 4212], 20.00th=[ 8557], 00:26:17.644 | 30.00th=[10537], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:26:17.644 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.644 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:17.644 | 99.99th=[10805] 00:26:17.644 lat (msec) : 100=1.79%, 
>=2000=98.21% 00:26:17.644 cpu : usr=0.00%, sys=0.58%, ctx=116, majf=0, minf=14337 00:26:17.644 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.644 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393117: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=217, BW=217MiB/s (228MB/s)(2184MiB/10056msec) 00:26:17.644 slat (usec): min=41, max=64674, avg=4573.80, stdev=7728.43 00:26:17.644 clat (msec): min=53, max=993, avg=564.97, stdev=166.64 00:26:17.644 lat (msec): min=55, max=1001, avg=569.54, stdev=167.55 00:26:17.644 clat percentiles (msec): 00:26:17.644 | 1.00th=[ 197], 5.00th=[ 388], 10.00th=[ 393], 20.00th=[ 401], 00:26:17.644 | 30.00th=[ 443], 40.00th=[ 518], 50.00th=[ 535], 60.00th=[ 567], 00:26:17.644 | 70.00th=[ 651], 80.00th=[ 684], 90.00th=[ 818], 95.00th=[ 919], 00:26:17.644 | 99.00th=[ 969], 99.50th=[ 969], 99.90th=[ 986], 99.95th=[ 995], 00:26:17.644 | 99.99th=[ 995] 00:26:17.644 bw ( KiB/s): min=135168, max=327680, per=5.84%, avg=221543.58, stdev=62318.01, samples=19 00:26:17.644 iops : min= 132, max= 320, avg=216.32, stdev=60.91, samples=19 00:26:17.644 lat (msec) : 100=0.60%, 250=0.69%, 500=33.97%, 750=50.00%, 1000=14.74% 00:26:17.644 cpu : usr=0.16%, sys=2.93%, ctx=2000, majf=0, minf=32769 00:26:17.644 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:17.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.644 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.644 job1: (groupid=0, jobs=1): err= 0: pid=393118: Fri Dec 13 19:20:50 2024 00:26:17.644 read: IOPS=2, BW=2118KiB/s (2169kB/s)(22.0MiB/10638msec) 00:26:17.645 slat (msec): min=4, max=2097, avg=480.33, stdev=867.03 00:26:17.645 clat (msec): min=69, max=10617, avg=7194.67, stdev=3017.96 00:26:17.645 lat (msec): min=2124, max=10636, avg=7675.00, stdev=2648.25 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 70], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4329], 00:26:17.645 | 30.00th=[ 6477], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8658], 00:26:17.645 | 70.00th=[ 8658], 80.00th=[10537], 90.00th=[10537], 95.00th=[10671], 00:26:17.645 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.645 | 99.99th=[10671] 00:26:17.645 lat (msec) : 100=4.55%, >=2000=95.45% 00:26:17.645 cpu : usr=0.00%, sys=0.15%, ctx=83, majf=0, minf=5633 00:26:17.645 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:26:17.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.645 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.645 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.645 job1: (groupid=0, jobs=1): err= 0: pid=393119: Fri Dec 13 19:20:50 2024 00:26:17.645 read: IOPS=2, BW=2415KiB/s (2473kB/s)(25.0MiB/10600msec) 00:26:17.645 slat (usec): min=594, max=4195.0k, avg=423162.64, stdev=1028615.37 00:26:17.645 clat (msec): min=20, max=10592, avg=8494.83, 
stdev=3094.70 00:26:17.645 lat (msec): min=4215, max=10599, avg=8918.00, stdev=2565.72 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 21], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4245], 00:26:17.645 | 30.00th=[ 8557], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402], 00:26:17.645 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:26:17.645 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.645 | 99.99th=[10537] 00:26:17.645 lat (msec) : 50=4.00%, >=2000=96.00% 00:26:17.645 cpu : usr=0.00%, sys=0.18%, ctx=80, majf=0, minf=6401 00:26:17.645 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:26:17.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.645 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.645 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.645 job2: (groupid=0, jobs=1): err= 0: pid=393120: Fri Dec 13 19:20:50 2024 00:26:17.645 read: IOPS=11, BW=12.0MiB/s (12.6MB/s)(127MiB/10590msec) 00:26:17.645 slat (usec): min=412, max=2101.3k, avg=79149.89, stdev=355293.46 00:26:17.645 clat (msec): min=537, max=10588, avg=2682.46, stdev=2984.61 00:26:17.645 lat (msec): min=606, max=10589, avg=2761.61, stdev=3059.62 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 609], 5.00th=[ 625], 10.00th=[ 726], 20.00th=[ 944], 00:26:17.645 | 30.00th=[ 1099], 40.00th=[ 1318], 50.00th=[ 1536], 60.00th=[ 1770], 00:26:17.645 | 70.00th=[ 1972], 80.00th=[ 2165], 90.00th=[ 8557], 95.00th=[10537], 00:26:17.645 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.645 | 99.99th=[10537] 00:26:17.645 lat (msec) : 750=13.39%, 1000=12.60%, 2000=48.03%, >=2000=25.98% 00:26:17.645 cpu : usr=0.00%, sys=0.70%, ctx=274, majf=0, minf=32513 00:26:17.645 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.3%, 16=12.6%, 32=25.2%, >=64=50.4% 00:26:17.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.645 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:17.645 issued rwts: total=127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.645 job2: (groupid=0, jobs=1): err= 0: pid=393121: Fri Dec 13 19:20:50 2024 00:26:17.645 read: IOPS=64, BW=64.3MiB/s (67.4MB/s)(681MiB/10592msec) 00:26:17.645 slat (usec): min=466, max=2103.8k, avg=15406.99, stdev=131635.10 00:26:17.645 clat (msec): min=95, max=4792, avg=1181.78, stdev=743.27 00:26:17.645 lat (msec): min=623, max=4890, avg=1197.19, stdev=763.58 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 625], 5.00th=[ 634], 10.00th=[ 642], 20.00th=[ 693], 00:26:17.645 | 30.00th=[ 726], 40.00th=[ 768], 50.00th=[ 785], 60.00th=[ 969], 00:26:17.645 | 70.00th=[ 1183], 80.00th=[ 1267], 90.00th=[ 2534], 95.00th=[ 2735], 00:26:17.645 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 4799], 99.95th=[ 4799], 00:26:17.645 | 99.99th=[ 4799] 00:26:17.645 bw ( KiB/s): min=30720, max=206848, per=3.73%, avg=141568.00, stdev=58731.94, samples=8 00:26:17.645 iops : min= 30, max= 202, avg=138.25, stdev=57.36, samples=8 00:26:17.645 lat (msec) : 100=0.15%, 750=35.83%, 1000=26.58%, 2000=17.47%, >=2000=19.97% 00:26:17.645 cpu : usr=0.02%, sys=1.17%, ctx=1327, majf=0, minf=32769 00:26:17.645 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.7% 00:26:17.645 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.645 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:17.645 issued rwts: total=681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.645 job2: (groupid=0, jobs=1): err= 0: pid=393122: Fri Dec 13 19:20:50 2024 00:26:17.645 read: IOPS=20, BW=20.9MiB/s (21.9MB/s)(223MiB/10685msec) 00:26:17.645 slat (usec): min=84, max=2120.6k, avg=47559.77, stdev=281080.03 00:26:17.645 clat (msec): min=76, max=10155, avg=5874.08, stdev=4303.34 00:26:17.645 lat (msec): min=522, max=10159, avg=5921.64, stdev=4292.52 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 523], 5.00th=[ 523], 10.00th=[ 527], 20.00th=[ 542], 00:26:17.645 | 30.00th=[ 651], 40.00th=[ 3809], 50.00th=[ 8020], 60.00th=[ 9731], 00:26:17.645 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:26:17.645 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:26:17.645 | 99.99th=[10134] 00:26:17.645 bw ( KiB/s): min= 2000, max=153600, per=0.86%, avg=32760.00, stdev=59756.02, samples=6 00:26:17.645 iops : min= 1, max= 150, avg=31.83, stdev=58.45, samples=6 00:26:17.645 lat (msec) : 100=0.45%, 750=33.63%, 2000=1.35%, >=2000=64.57% 00:26:17.645 cpu : usr=0.01%, sys=1.33%, ctx=246, majf=0, minf=32769 00:26:17.645 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.3%, >=64=71.7% 00:26:17.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.645 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:26:17.645 issued rwts: total=223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.645 job2: (groupid=0, jobs=1): err= 0: pid=393123: Fri Dec 13 19:20:50 2024 00:26:17.645 read: IOPS=113, BW=113MiB/s (119MB/s)(1212MiB/10724msec) 00:26:17.645 slat (usec): min=40, max=2039.1k, avg=8774.77, stdev=85429.11 00:26:17.645 clat (msec): min=79, max=4516, avg=1011.49, stdev=1183.37 00:26:17.645 lat (msec): min=390, max=4520, avg=1020.27, stdev=1189.05 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 397], 20.00th=[ 401], 00:26:17.645 | 30.00th=[ 447], 40.00th=[ 518], 50.00th=[ 523], 60.00th=[ 558], 00:26:17.645 | 70.00th=[ 634], 80.00th=[ 735], 90.00th=[ 3809], 95.00th=[ 4279], 00:26:17.645 | 99.00th=[ 4463], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:26:17.645 | 99.99th=[ 4530] 00:26:17.645 bw ( KiB/s): min= 4096, max=329728, per=5.32%, avg=201775.36, stdev=107960.43, samples=11 00:26:17.645 iops : min= 4, max= 322, avg=197.00, stdev=105.41, samples=11 00:26:17.645 lat (msec) : 100=0.08%, 500=32.84%, 750=48.60%, 1000=1.73%, 2000=0.08% 00:26:17.645 lat (msec) : >=2000=16.67% 00:26:17.645 cpu : usr=0.10%, sys=2.28%, ctx=1126, majf=0, minf=32769 00:26:17.645 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:26:17.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.645 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.645 issued rwts: total=1212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.645 job2: (groupid=0, jobs=1): err= 0: pid=393124: Fri Dec 13 19:20:50 2024 00:26:17.645 read: IOPS=3, BW=3768KiB/s (3859kB/s)(39.0MiB/10598msec) 00:26:17.645 slat (usec): min=934, max=2095.6k, avg=269789.03, stdev=682994.85 00:26:17.645 clat 
(msec): min=75, max=10593, avg=7679.73, stdev=3359.62 00:26:17.645 lat (msec): min=2107, max=10597, avg=7949.52, stdev=3148.74 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 75], 5.00th=[ 2106], 10.00th=[ 2165], 20.00th=[ 4245], 00:26:17.645 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10402], 00:26:17.645 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:26:17.645 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.645 | 99.99th=[10537] 00:26:17.645 lat (msec) : 100=2.56%, >=2000=97.44% 00:26:17.645 cpu : usr=0.00%, sys=0.33%, ctx=75, majf=0, minf=9985 00:26:17.645 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:26:17.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.645 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.645 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.645 job2: (groupid=0, jobs=1): err= 0: pid=393125: Fri Dec 13 19:20:50 2024 00:26:17.645 read: IOPS=163, BW=164MiB/s (172MB/s)(1731MiB/10578msec) 00:26:17.645 slat (usec): min=37, max=2002.0k, avg=6041.74, stdev=63538.49 00:26:17.645 clat (msec): min=112, max=4715, avg=628.05, stdev=551.47 00:26:17.645 lat (msec): min=391, max=6437, avg=634.09, stdev=561.93 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 397], 20.00th=[ 401], 00:26:17.645 | 30.00th=[ 405], 40.00th=[ 414], 50.00th=[ 451], 60.00th=[ 502], 00:26:17.645 | 70.00th=[ 535], 80.00th=[ 575], 90.00th=[ 609], 95.00th=[ 2366], 00:26:17.645 | 99.00th=[ 2567], 99.50th=[ 2601], 99.90th=[ 4665], 99.95th=[ 4732], 00:26:17.645 | 99.99th=[ 4732] 00:26:17.645 bw ( KiB/s): min=59273, max=325632, per=6.66%, avg=252425.31, stdev=73188.79, samples=13 00:26:17.645 iops : min= 57, max= 318, avg=246.38, stdev=71.61, samples=13 00:26:17.645 lat (msec) : 250=0.06%, 500=59.04%, 750=32.76%, >=2000=8.15% 00:26:17.645 cpu : usr=0.04%, sys=1.60%, ctx=1662, majf=0, minf=32769 00:26:17.645 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:17.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.645 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.645 issued rwts: total=1731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.645 job2: (groupid=0, jobs=1): err= 0: pid=393126: Fri Dec 13 19:20:50 2024 00:26:17.645 read: IOPS=4, BW=4788KiB/s (4903kB/s)(50.0MiB/10693msec) 00:26:17.645 slat (usec): min=653, max=2100.1k, avg=212342.44, stdev=614625.98 00:26:17.645 clat (msec): min=75, max=10677, avg=7176.36, stdev=3424.58 00:26:17.645 lat (msec): min=2135, max=10692, avg=7388.70, stdev=3302.28 00:26:17.645 clat percentiles (msec): 00:26:17.645 | 1.00th=[ 75], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:26:17.645 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658], 00:26:17.646 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:26:17.646 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.646 | 99.99th=[10671] 00:26:17.646 lat (msec) : 100=2.00%, >=2000=98.00% 00:26:17.646 cpu : usr=0.01%, sys=0.47%, ctx=76, majf=0, minf=12801 00:26:17.646 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:26:17.646 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.646 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.646 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.646 job2: (groupid=0, jobs=1): err= 0: pid=393127: Fri Dec 13 19:20:50 2024 00:26:17.646 read: IOPS=4, BW=4598KiB/s (4709kB/s)(48.0MiB/10689msec) 00:26:17.646 slat (usec): min=507, max=2103.3k, avg=221070.27, stdev=626637.46 00:26:17.646 clat (msec): min=77, max=10686, avg=8843.10, stdev=2916.20 00:26:17.646 lat (msec): min=2130, max=10688, avg=9064.17, stdev=2625.22 00:26:17.646 clat percentiles (msec): 00:26:17.646 | 1.00th=[ 78], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6409], 00:26:17.646 | 30.00th=[ 8658], 40.00th=[10402], 50.00th=[10537], 60.00th=[10537], 00:26:17.646 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.646 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.646 | 99.99th=[10671] 00:26:17.646 lat (msec) : 100=2.08%, >=2000=97.92% 00:26:17.646 cpu : usr=0.00%, sys=0.40%, ctx=93, majf=0, minf=12289 00:26:17.646 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:26:17.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.646 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.646 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.646 job2: (groupid=0, jobs=1): err= 0: pid=393128: Fri Dec 13 19:20:50 2024 00:26:17.646 read: IOPS=25, BW=25.7MiB/s (27.0MB/s)(273MiB/10610msec) 00:26:17.646 slat (usec): min=41, max=2096.1k, avg=38528.42, stdev=241236.09 00:26:17.646 clat (msec): min=89, max=10450, avg=4615.31, stdev=3616.21 00:26:17.646 lat (msec): min=886, max=10503, avg=4653.84, stdev=3613.44 00:26:17.646 clat percentiles (msec): 00:26:17.646 | 1.00th=[ 885], 5.00th=[ 902], 10.00th=[ 911], 20.00th=[ 919], 00:26:17.646 | 30.00th=[ 936], 40.00th=[ 961], 50.00th=[ 4178], 60.00th=[ 5336], 00:26:17.646 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9194], 00:26:17.646 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[10402], 99.95th=[10402], 00:26:17.646 | 99.99th=[10402] 00:26:17.646 bw ( KiB/s): min= 6131, max=143360, per=1.12%, avg=42421.00, stdev=53559.89, samples=7 00:26:17.646 iops : min= 5, max= 140, avg=41.29, stdev=52.42, samples=7 00:26:17.646 lat (msec) : 100=0.37%, 1000=41.03%, 2000=1.83%, >=2000=56.78% 00:26:17.646 cpu : usr=0.04%, sys=0.86%, ctx=329, majf=0, minf=32769 00:26:17.646 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.7%, >=64=76.9% 00:26:17.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.646 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:26:17.646 issued rwts: total=273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.646 job2: (groupid=0, jobs=1): err= 0: pid=393129: Fri Dec 13 19:20:50 2024 00:26:17.646 read: IOPS=152, BW=153MiB/s (160MB/s)(1628MiB/10652msec) 00:26:17.646 slat (usec): min=40, max=2024.1k, avg=6486.84, stdev=50530.49 00:26:17.646 clat (msec): min=83, max=3079, avg=801.53, stdev=561.66 00:26:17.646 lat (msec): min=369, max=3105, avg=808.02, stdev=563.28 00:26:17.646 clat percentiles (msec): 00:26:17.646 | 1.00th=[ 372], 5.00th=[ 376], 10.00th=[ 414], 20.00th=[ 498], 
00:26:17.646 | 30.00th=[ 535], 40.00th=[ 625], 50.00th=[ 651], 60.00th=[ 709], 00:26:17.646 | 70.00th=[ 785], 80.00th=[ 877], 90.00th=[ 953], 95.00th=[ 2500], 00:26:17.646 | 99.00th=[ 2970], 99.50th=[ 3037], 99.90th=[ 3071], 99.95th=[ 3071], 00:26:17.646 | 99.99th=[ 3071] 00:26:17.646 bw ( KiB/s): min= 2000, max=344064, per=4.50%, avg=170787.83, stdev=84179.60, samples=18 00:26:17.646 iops : min= 1, max= 336, avg=166.67, stdev=82.32, samples=18 00:26:17.646 lat (msec) : 100=0.06%, 500=22.24%, 750=45.21%, 1000=24.63%, 2000=0.06% 00:26:17.646 lat (msec) : >=2000=7.80% 00:26:17.646 cpu : usr=0.06%, sys=2.48%, ctx=2171, majf=0, minf=32769 00:26:17.646 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:17.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.646 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.646 issued rwts: total=1628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.646 job2: (groupid=0, jobs=1): err= 0: pid=393130: Fri Dec 13 19:20:50 2024 00:26:17.646 read: IOPS=29, BW=29.1MiB/s (30.5MB/s)(309MiB/10613msec) 00:26:17.646 slat (usec): min=71, max=2090.0k, avg=34027.20, stdev=233253.53 00:26:17.646 clat (msec): min=95, max=9279, avg=4153.94, stdev=3617.21 00:26:17.646 lat (msec): min=839, max=9287, avg=4187.96, stdev=3618.43 00:26:17.646 clat percentiles (msec): 00:26:17.646 | 1.00th=[ 844], 5.00th=[ 860], 10.00th=[ 869], 20.00th=[ 885], 00:26:17.646 | 30.00th=[ 919], 40.00th=[ 936], 50.00th=[ 2165], 60.00th=[ 5067], 00:26:17.646 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9194], 00:26:17.646 | 99.00th=[ 9194], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:26:17.646 | 99.99th=[ 9329] 00:26:17.646 bw ( KiB/s): min=14336, max=141312, per=1.40%, avg=52950.71, stdev=57071.71, samples=7 00:26:17.646 iops : min= 14, max= 138, avg=51.57, stdev=55.84, samples=7 00:26:17.646 lat (msec) : 100=0.32%, 1000=49.19%, >=2000=50.49% 00:26:17.646 cpu : usr=0.04%, sys=1.41%, ctx=275, majf=0, minf=32769 00:26:17.646 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.6% 00:26:17.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.646 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:26:17.646 issued rwts: total=309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.646 job2: (groupid=0, jobs=1): err= 0: pid=393131: Fri Dec 13 19:20:50 2024 00:26:17.646 read: IOPS=15, BW=15.2MiB/s (15.9MB/s)(153MiB/10071msec) 00:26:17.646 slat (usec): min=485, max=2100.9k, avg=65588.24, stdev=312716.71 00:26:17.646 clat (msec): min=35, max=9711, avg=2263.13, stdev=2760.15 00:26:17.646 lat (msec): min=104, max=9718, avg=2328.72, stdev=2822.43 00:26:17.646 clat percentiles (msec): 00:26:17.646 | 1.00th=[ 105], 5.00th=[ 218], 10.00th=[ 334], 20.00th=[ 481], 00:26:17.646 | 30.00th=[ 693], 40.00th=[ 911], 50.00th=[ 1133], 60.00th=[ 1267], 00:26:17.646 | 70.00th=[ 1603], 80.00th=[ 3641], 90.00th=[ 7819], 95.00th=[ 9597], 00:26:17.646 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:26:17.646 | 99.99th=[ 9731] 00:26:17.646 bw ( KiB/s): min=50519, max=50519, per=1.33%, avg=50519.00, stdev= 0.00, samples=1 00:26:17.646 iops : min= 49, max= 49, avg=49.00, stdev= 0.00, samples=1 00:26:17.646 lat (msec) : 50=0.65%, 250=8.50%, 500=11.11%, 750=11.76%, 1000=11.76% 00:26:17.646 
lat (msec) : 2000=28.10%, >=2000=28.10% 00:26:17.646 cpu : usr=0.01%, sys=0.83%, ctx=295, majf=0, minf=32769 00:26:17.646 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.5%, 32=20.9%, >=64=58.8% 00:26:17.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.646 complete : 0=0.0%, 4=96.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.7% 00:26:17.646 issued rwts: total=153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.646 job2: (groupid=0, jobs=1): err= 0: pid=393132: Fri Dec 13 19:20:50 2024 00:26:17.646 read: IOPS=5, BW=5827KiB/s (5967kB/s)(61.0MiB/10720msec) 00:26:17.646 slat (usec): min=934, max=2108.0k, avg=174463.02, stdev=563298.45 00:26:17.646 clat (msec): min=76, max=10718, avg=9147.70, stdev=2955.74 00:26:17.646 lat (msec): min=2111, max=10719, avg=9322.16, stdev=2715.76 00:26:17.646 clat percentiles (msec): 00:26:17.646 | 1.00th=[ 78], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 6409], 00:26:17.646 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10671], 60.00th=[10671], 00:26:17.646 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.646 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.646 | 99.99th=[10671] 00:26:17.646 lat (msec) : 100=1.64%, >=2000=98.36% 00:26:17.646 cpu : usr=0.01%, sys=0.58%, ctx=114, majf=0, minf=15617 00:26:17.646 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:26:17.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.646 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.646 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.646 job3: (groupid=0, jobs=1): err= 0: pid=393133: Fri Dec 13 19:20:50 2024 00:26:17.646 read: IOPS=42, BW=42.7MiB/s (44.8MB/s)(458MiB/10723msec) 00:26:17.646 slat (usec): min=55, max=2100.8k, avg=23216.83, stdev=192972.80 00:26:17.646 clat (msec): min=85, max=9162, avg=2909.48, stdev=3570.27 00:26:17.646 lat (msec): min=548, max=9167, avg=2932.70, stdev=3577.73 00:26:17.646 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 550], 5.00th=[ 617], 10.00th=[ 617], 20.00th=[ 625], 00:26:17.647 | 30.00th=[ 625], 40.00th=[ 634], 50.00th=[ 642], 60.00th=[ 659], 00:26:17.647 | 70.00th=[ 2165], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060], 00:26:17.647 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:26:17.647 | 99.99th=[ 9194] 00:26:17.647 bw ( KiB/s): min= 8192, max=208896, per=2.55%, avg=96548.57, stdev=97507.38, samples=7 00:26:17.647 iops : min= 8, max= 204, avg=94.29, stdev=95.22, samples=7 00:26:17.647 lat (msec) : 100=0.22%, 750=68.78%, >=2000=31.00% 00:26:17.647 cpu : usr=0.03%, sys=1.73%, ctx=417, majf=0, minf=32331 00:26:17.647 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.2% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:17.647 issued rwts: total=458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.647 job3: (groupid=0, jobs=1): err= 0: pid=393134: Fri Dec 13 19:20:50 2024 00:26:17.647 read: IOPS=4, BW=4688KiB/s (4800kB/s)(49.0MiB/10704msec) 00:26:17.647 slat (usec): min=1360, max=2078.6k, avg=216183.45, stdev=614891.09 00:26:17.647 clat (msec): min=110, 
max=10699, avg=7685.33, stdev=3332.51 00:26:17.647 lat (msec): min=2140, max=10703, avg=7901.51, stdev=3170.51 00:26:17.647 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 110], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:26:17.647 | 30.00th=[ 6409], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10537], 00:26:17.647 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.647 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.647 | 99.99th=[10671] 00:26:17.647 lat (msec) : 250=2.04%, >=2000=97.96% 00:26:17.647 cpu : usr=0.00%, sys=0.56%, ctx=88, majf=0, minf=12545 00:26:17.647 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.647 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.647 job3: (groupid=0, jobs=1): err= 0: pid=393135: Fri Dec 13 19:20:50 2024 00:26:17.647 read: IOPS=1, BW=1631KiB/s (1670kB/s)(17.0MiB/10673msec) 00:26:17.647 slat (msec): min=8, max=2135, avg=622.92, stdev=964.75 00:26:17.647 clat (msec): min=82, max=10626, avg=6658.17, stdev=3853.63 00:26:17.647 lat (msec): min=2144, max=10672, avg=7281.10, stdev=3569.75 00:26:17.647 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 83], 5.00th=[ 83], 10.00th=[ 2140], 20.00th=[ 2198], 00:26:17.647 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[10537], 00:26:17.647 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:26:17.647 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.647 | 99.99th=[10671] 00:26:17.647 lat (msec) : 100=5.88%, >=2000=94.12% 00:26:17.647 cpu : usr=0.00%, sys=0.12%, ctx=74, majf=0, minf=4353 00:26:17.647 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.647 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.647 job3: (groupid=0, jobs=1): err= 0: pid=393136: Fri Dec 13 19:20:50 2024 00:26:17.647 read: IOPS=2, BW=2590KiB/s (2652kB/s)(27.0MiB/10676msec) 00:26:17.647 slat (usec): min=676, max=2124.2k, avg=392199.15, stdev=808938.84 00:26:17.647 clat (msec): min=85, max=10674, avg=7833.81, stdev=3512.01 00:26:17.647 lat (msec): min=2150, max=10675, avg=8226.01, stdev=3189.98 00:26:17.647 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 86], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4279], 00:26:17.647 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10537], 00:26:17.647 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.647 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.647 | 99.99th=[10671] 00:26:17.647 lat (msec) : 100=3.70%, >=2000=96.30% 00:26:17.647 cpu : usr=0.00%, sys=0.18%, ctx=71, majf=0, minf=6913 00:26:17.647 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.647 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.647 job3: (groupid=0, jobs=1): err= 0: pid=393137: Fri Dec 13 19:20:50 2024 00:26:17.647 read: IOPS=5, BW=5557KiB/s (5690kB/s)(58.0MiB/10688msec) 00:26:17.647 slat (usec): min=761, max=2095.0k, avg=182776.88, stdev=572828.13 00:26:17.647 clat (msec): min=86, max=10683, avg=9044.43, stdev=2856.55 00:26:17.647 lat (msec): min=2115, max=10687, avg=9227.21, stdev=2601.05 00:26:17.647 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 87], 5.00th=[ 2123], 10.00th=[ 4279], 20.00th=[ 6477], 00:26:17.647 | 30.00th=[10402], 40.00th=[10537], 50.00th=[10537], 60.00th=[10537], 00:26:17.647 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.647 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.647 | 99.99th=[10671] 00:26:17.647 lat (msec) : 100=1.72%, >=2000=98.28% 00:26:17.647 cpu : usr=0.00%, sys=0.52%, ctx=96, majf=0, minf=14849 00:26:17.647 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.647 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.647 job3: (groupid=0, jobs=1): err= 0: pid=393138: Fri Dec 13 19:20:50 2024 00:26:17.647 read: IOPS=3, BW=3440KiB/s (3523kB/s)(36.0MiB/10715msec) 00:26:17.647 slat (usec): min=1396, max=2105.7k, avg=294392.80, stdev=708485.75 00:26:17.647 clat (msec): min=116, max=10712, avg=7547.69, stdev=3706.01 00:26:17.647 lat (msec): min=2127, max=10714, avg=7842.09, stdev=3514.83 00:26:17.647 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 116], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4279], 00:26:17.647 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[10537], 60.00th=[10537], 00:26:17.647 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.647 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.647 | 99.99th=[10671] 00:26:17.647 lat (msec) : 250=2.78%, >=2000=97.22% 00:26:17.647 cpu : usr=0.00%, sys=0.34%, ctx=86, majf=0, minf=9217 00:26:17.647 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:17.647 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.647 job3: (groupid=0, jobs=1): err= 0: pid=393139: Fri Dec 13 19:20:50 2024 00:26:17.647 read: IOPS=31, BW=31.4MiB/s (33.0MB/s)(316MiB/10051msec) 00:26:17.647 slat (usec): min=45, max=2079.7k, avg=31689.53, stdev=216475.23 00:26:17.647 clat (msec): min=35, max=8655, avg=2011.96, stdev=2363.45 00:26:17.647 lat (msec): min=109, max=8657, avg=2043.65, stdev=2391.43 00:26:17.647 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 112], 5.00th=[ 148], 10.00th=[ 264], 20.00th=[ 558], 00:26:17.647 | 30.00th=[ 793], 40.00th=[ 919], 50.00th=[ 927], 60.00th=[ 995], 00:26:17.647 | 70.00th=[ 2702], 80.00th=[ 2735], 90.00th=[ 6812], 95.00th=[ 8557], 00:26:17.647 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:26:17.647 | 99.99th=[ 8658] 00:26:17.647 bw ( KiB/s): min=112640, max=145408, per=3.38%, avg=128180.67, stdev=16448.98, samples=3 00:26:17.647 
iops : min= 110, max= 142, avg=125.00, stdev=16.09, samples=3 00:26:17.647 lat (msec) : 50=0.32%, 250=8.54%, 500=11.08%, 750=9.49%, 1000=36.08% 00:26:17.647 lat (msec) : 2000=2.53%, >=2000=31.96% 00:26:17.647 cpu : usr=0.01%, sys=1.29%, ctx=287, majf=0, minf=32769 00:26:17.647 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.1%, >=64=80.1% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:26:17.647 issued rwts: total=316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.647 job3: (groupid=0, jobs=1): err= 0: pid=393140: Fri Dec 13 19:20:50 2024 00:26:17.647 read: IOPS=15, BW=15.3MiB/s (16.1MB/s)(164MiB/10692msec) 00:26:17.647 slat (usec): min=596, max=2095.4k, avg=64664.22, stdev=320580.49 00:26:17.647 clat (msec): min=85, max=8630, avg=6463.95, stdev=2545.13 00:26:17.647 lat (msec): min=983, max=10478, avg=6528.61, stdev=2509.26 00:26:17.647 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 978], 5.00th=[ 995], 10.00th=[ 1955], 20.00th=[ 2970], 00:26:17.647 | 30.00th=[ 7483], 40.00th=[ 7684], 50.00th=[ 7752], 60.00th=[ 7886], 00:26:17.647 | 70.00th=[ 7953], 80.00th=[ 8087], 90.00th=[ 8288], 95.00th=[ 8356], 00:26:17.647 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:26:17.647 | 99.99th=[ 8658] 00:26:17.647 bw ( KiB/s): min= 2048, max=44966, per=0.32%, avg=12273.00, stdev=16348.00, samples=6 00:26:17.647 iops : min= 2, max= 43, avg=11.83, stdev=15.60, samples=6 00:26:17.647 lat (msec) : 100=0.61%, 1000=6.71%, 2000=4.88%, >=2000=87.80% 00:26:17.647 cpu : usr=0.00%, sys=0.87%, ctx=399, majf=0, minf=32769 00:26:17.647 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.9%, 16=9.8%, 32=19.5%, >=64=61.6% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.647 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:26:17.647 issued rwts: total=164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.647 job3: (groupid=0, jobs=1): err= 0: pid=393141: Fri Dec 13 19:20:50 2024 00:26:17.647 read: IOPS=2, BW=2894KiB/s (2964kB/s)(30.0MiB/10614msec) 00:26:17.647 slat (usec): min=892, max=2075.6k, avg=350218.99, stdev=760170.22 00:26:17.647 clat (msec): min=106, max=10547, avg=5931.70, stdev=3106.33 00:26:17.647 lat (msec): min=2140, max=10613, avg=6281.92, stdev=3018.00 00:26:17.647 clat percentiles (msec): 00:26:17.647 | 1.00th=[ 107], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:26:17.647 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:26:17.647 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10537], 95.00th=[10537], 00:26:17.647 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:26:17.647 | 99.99th=[10537] 00:26:17.647 lat (msec) : 250=3.33%, >=2000=96.67% 00:26:17.647 cpu : usr=0.00%, sys=0.33%, ctx=62, majf=0, minf=7681 00:26:17.647 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:26:17.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.648 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.648 job3: (groupid=0, jobs=1): err= 0: pid=393142: Fri Dec 13 19:20:50 2024 
00:26:17.648 read: IOPS=68, BW=68.5MiB/s (71.9MB/s)(725MiB/10580msec) 00:26:17.648 slat (usec): min=41, max=2099.3k, avg=14426.22, stdev=132695.30 00:26:17.648 clat (msec): min=116, max=6957, avg=1761.92, stdev=2218.41 00:26:17.648 lat (msec): min=478, max=6967, avg=1776.34, stdev=2223.80 00:26:17.648 clat percentiles (msec): 00:26:17.648 | 1.00th=[ 481], 5.00th=[ 493], 10.00th=[ 502], 20.00th=[ 531], 00:26:17.648 | 30.00th=[ 634], 40.00th=[ 701], 50.00th=[ 835], 60.00th=[ 877], 00:26:17.648 | 70.00th=[ 927], 80.00th=[ 1011], 90.00th=[ 6678], 95.00th=[ 6812], 00:26:17.648 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:26:17.648 | 99.99th=[ 6946] 00:26:17.648 bw ( KiB/s): min=14336, max=270336, per=3.22%, avg=122224.20, stdev=96113.24, samples=10 00:26:17.648 iops : min= 14, max= 264, avg=119.20, stdev=93.93, samples=10 00:26:17.648 lat (msec) : 250=0.14%, 500=9.66%, 750=34.34%, 1000=35.17%, 2000=1.24% 00:26:17.648 lat (msec) : >=2000=19.45% 00:26:17.648 cpu : usr=0.05%, sys=1.68%, ctx=792, majf=0, minf=32769 00:26:17.648 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:26:17.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:17.648 issued rwts: total=725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.648 job3: (groupid=0, jobs=1): err= 0: pid=393143: Fri Dec 13 19:20:50 2024 00:26:17.648 read: IOPS=2, BW=2977KiB/s (3048kB/s)(31.0MiB/10664msec) 00:26:17.648 slat (usec): min=974, max=2132.2k, avg=341196.48, stdev=764398.30 00:26:17.648 clat (msec): min=86, max=10659, avg=7346.62, stdev=3733.38 00:26:17.648 lat (msec): min=2110, max=10663, avg=7687.82, stdev=3525.26 00:26:17.648 clat percentiles (msec): 00:26:17.648 | 1.00th=[ 87], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 4245], 00:26:17.648 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[10537], 60.00th=[10537], 00:26:17.648 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.648 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.648 | 99.99th=[10671] 00:26:17.648 lat (msec) : 100=3.23%, >=2000=96.77% 00:26:17.648 cpu : usr=0.00%, sys=0.29%, ctx=89, majf=0, minf=7937 00:26:17.648 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:26:17.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.648 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.648 job3: (groupid=0, jobs=1): err= 0: pid=393144: Fri Dec 13 19:20:50 2024 00:26:17.648 read: IOPS=169, BW=169MiB/s (177MB/s)(1711MiB/10112msec) 00:26:17.648 slat (usec): min=40, max=2062.3k, avg=5857.56, stdev=50525.97 00:26:17.648 clat (msec): min=78, max=2814, avg=726.57, stdev=586.70 00:26:17.648 lat (msec): min=145, max=2815, avg=732.43, stdev=588.80 00:26:17.648 clat percentiles (msec): 00:26:17.648 | 1.00th=[ 259], 5.00th=[ 388], 10.00th=[ 393], 20.00th=[ 397], 00:26:17.648 | 30.00th=[ 409], 40.00th=[ 468], 50.00th=[ 535], 60.00th=[ 651], 00:26:17.648 | 70.00th=[ 701], 80.00th=[ 827], 90.00th=[ 953], 95.00th=[ 2668], 00:26:17.648 | 99.00th=[ 2802], 99.50th=[ 2802], 99.90th=[ 2802], 99.95th=[ 2802], 00:26:17.648 | 99.99th=[ 2802] 00:26:17.648 bw ( KiB/s): min=12288, max=329728, 
per=5.33%, avg=202121.44, stdev=89260.44, samples=16 00:26:17.648 iops : min= 12, max= 322, avg=197.38, stdev=87.18, samples=16 00:26:17.648 lat (msec) : 100=0.06%, 250=0.88%, 500=43.19%, 750=30.04%, 1000=18.06% 00:26:17.648 lat (msec) : 2000=0.35%, >=2000=7.42% 00:26:17.648 cpu : usr=0.12%, sys=2.74%, ctx=1509, majf=0, minf=32769 00:26:17.648 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:26:17.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.648 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.648 job3: (groupid=0, jobs=1): err= 0: pid=393145: Fri Dec 13 19:20:50 2024 00:26:17.648 read: IOPS=2, BW=2108KiB/s (2158kB/s)(22.0MiB/10688msec) 00:26:17.648 slat (usec): min=1107, max=2117.2k, avg=481904.24, stdev=884759.44 00:26:17.648 clat (msec): min=85, max=10681, avg=8732.47, stdev=3247.87 00:26:17.648 lat (msec): min=2180, max=10687, avg=9214.37, stdev=2631.94 00:26:17.648 clat percentiles (msec): 00:26:17.648 | 1.00th=[ 86], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6477], 00:26:17.648 | 30.00th=[ 8658], 40.00th=[10537], 50.00th=[10537], 60.00th=[10671], 00:26:17.648 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.648 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.648 | 99.99th=[10671] 00:26:17.648 lat (msec) : 100=4.55%, >=2000=95.45% 00:26:17.648 cpu : usr=0.01%, sys=0.21%, ctx=75, majf=0, minf=5633 00:26:17.648 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:26:17.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.648 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.648 job4: (groupid=0, jobs=1): err= 0: pid=393147: Fri Dec 13 19:20:50 2024 00:26:17.648 read: IOPS=141, BW=141MiB/s (148MB/s)(1506MiB/10667msec) 00:26:17.648 slat (usec): min=41, max=2102.3k, avg=6637.60, stdev=57270.21 00:26:17.648 clat (msec): min=388, max=3609, avg=852.83, stdev=842.27 00:26:17.648 lat (msec): min=389, max=4209, avg=859.47, stdev=847.47 00:26:17.648 clat percentiles (msec): 00:26:17.648 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 401], 20.00th=[ 430], 00:26:17.648 | 30.00th=[ 464], 40.00th=[ 498], 50.00th=[ 527], 60.00th=[ 592], 00:26:17.648 | 70.00th=[ 693], 80.00th=[ 919], 90.00th=[ 1385], 95.00th=[ 3507], 00:26:17.648 | 99.00th=[ 3608], 99.50th=[ 3608], 99.90th=[ 3608], 99.95th=[ 3608], 00:26:17.648 | 99.99th=[ 3608] 00:26:17.648 bw ( KiB/s): min= 4096, max=314762, per=5.32%, avg=201648.21, stdev=93564.92, samples=14 00:26:17.648 iops : min= 4, max= 307, avg=196.86, stdev=91.32, samples=14 00:26:17.648 lat (msec) : 500=40.44%, 750=31.61%, 1000=11.35%, 2000=8.03%, >=2000=8.57% 00:26:17.648 cpu : usr=0.02%, sys=2.21%, ctx=2436, majf=0, minf=32769 00:26:17.648 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:26:17.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.648 issued rwts: total=1506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.648 job4: 
(groupid=0, jobs=1): err= 0: pid=393148: Fri Dec 13 19:20:50 2024 00:26:17.648 read: IOPS=51, BW=51.7MiB/s (54.3MB/s)(549MiB/10610msec) 00:26:17.648 slat (usec): min=40, max=2096.2k, avg=19105.47, stdev=168890.23 00:26:17.648 clat (msec): min=119, max=4845, avg=1532.84, stdev=1752.26 00:26:17.648 lat (msec): min=368, max=4846, avg=1551.94, stdev=1760.08 00:26:17.648 clat percentiles (msec): 00:26:17.648 | 1.00th=[ 368], 5.00th=[ 368], 10.00th=[ 372], 20.00th=[ 393], 00:26:17.648 | 30.00th=[ 443], 40.00th=[ 489], 50.00th=[ 527], 60.00th=[ 642], 00:26:17.648 | 70.00th=[ 718], 80.00th=[ 4463], 90.00th=[ 4597], 95.00th=[ 4665], 00:26:17.648 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:26:17.648 | 99.99th=[ 4866] 00:26:17.648 bw ( KiB/s): min=10219, max=342016, per=4.55%, avg=172437.40, stdev=129714.67, samples=5 00:26:17.648 iops : min= 9, max= 334, avg=168.20, stdev=126.98, samples=5 00:26:17.648 lat (msec) : 250=0.18%, 500=48.45%, 750=23.50%, 1000=1.09%, >=2000=26.78% 00:26:17.648 cpu : usr=0.02%, sys=0.89%, ctx=656, majf=0, minf=32769 00:26:17.648 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5% 00:26:17.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:17.648 issued rwts: total=549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.648 job4: (groupid=0, jobs=1): err= 0: pid=393149: Fri Dec 13 19:20:50 2024 00:26:17.648 read: IOPS=15, BW=15.5MiB/s (16.3MB/s)(165MiB/10642msec) 00:26:17.648 slat (usec): min=50, max=2095.7k, avg=63771.21, stdev=326218.80 00:26:17.648 clat (msec): min=119, max=8503, avg=5647.66, stdev=1494.98 00:26:17.648 lat (msec): min=2138, max=8504, avg=5711.43, stdev=1429.08 00:26:17.648 clat percentiles (msec): 00:26:17.648 | 1.00th=[ 2140], 5.00th=[ 2635], 10.00th=[ 2668], 20.00th=[ 4530], 00:26:17.648 | 30.00th=[ 5940], 40.00th=[ 6074], 50.00th=[ 6141], 60.00th=[ 6208], 00:26:17.648 | 70.00th=[ 6275], 80.00th=[ 6409], 90.00th=[ 6477], 95.00th=[ 8356], 00:26:17.648 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:26:17.648 | 99.99th=[ 8490] 00:26:17.648 bw ( KiB/s): min= 2000, max=65536, per=0.51%, avg=19444.00, stdev=30774.54, samples=4 00:26:17.648 iops : min= 1, max= 64, avg=18.75, stdev=30.24, samples=4 00:26:17.648 lat (msec) : 250=0.61%, >=2000=99.39% 00:26:17.648 cpu : usr=0.00%, sys=0.92%, ctx=153, majf=0, minf=32769 00:26:17.648 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.7%, 32=19.4%, >=64=61.8% 00:26:17.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:26:17.648 issued rwts: total=165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.648 job4: (groupid=0, jobs=1): err= 0: pid=393150: Fri Dec 13 19:20:50 2024 00:26:17.648 read: IOPS=24, BW=24.9MiB/s (26.2MB/s)(265MiB/10624msec) 00:26:17.648 slat (usec): min=49, max=2093.8k, avg=39661.11, stdev=235661.33 00:26:17.648 clat (msec): min=112, max=6089, avg=4118.99, stdev=1899.40 00:26:17.648 lat (msec): min=733, max=6092, avg=4158.65, stdev=1873.77 00:26:17.648 clat percentiles (msec): 00:26:17.648 | 1.00th=[ 735], 5.00th=[ 743], 10.00th=[ 760], 20.00th=[ 1804], 00:26:17.648 | 30.00th=[ 3910], 40.00th=[ 4178], 50.00th=[ 4463], 60.00th=[ 5470], 00:26:17.648 | 70.00th=[ 5604], 
80.00th=[ 5738], 90.00th=[ 5940], 95.00th=[ 6007], 00:26:17.648 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:26:17.648 | 99.99th=[ 6074] 00:26:17.648 bw ( KiB/s): min= 2000, max=169984, per=1.24%, avg=47096.00, stdev=65708.78, samples=6 00:26:17.648 iops : min= 1, max= 166, avg=45.83, stdev=64.30, samples=6 00:26:17.648 lat (msec) : 250=0.38%, 750=8.68%, 1000=10.19%, 2000=1.89%, >=2000=78.87% 00:26:17.648 cpu : usr=0.02%, sys=0.87%, ctx=525, majf=0, minf=32769 00:26:17.648 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.1%, >=64=76.2% 00:26:17.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.648 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:26:17.649 issued rwts: total=265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.649 job4: (groupid=0, jobs=1): err= 0: pid=393151: Fri Dec 13 19:20:50 2024 00:26:17.649 read: IOPS=36, BW=36.4MiB/s (38.2MB/s)(385MiB/10567msec) 00:26:17.649 slat (usec): min=108, max=2100.3k, avg=27153.78, stdev=203933.35 00:26:17.649 clat (msec): min=109, max=5071, avg=2102.68, stdev=1918.24 00:26:17.649 lat (msec): min=578, max=5071, avg=2129.84, stdev=1922.87 00:26:17.649 clat percentiles (msec): 00:26:17.649 | 1.00th=[ 575], 5.00th=[ 584], 10.00th=[ 584], 20.00th=[ 600], 00:26:17.649 | 30.00th=[ 625], 40.00th=[ 709], 50.00th=[ 785], 60.00th=[ 818], 00:26:17.649 | 70.00th=[ 4530], 80.00th=[ 4732], 90.00th=[ 4866], 95.00th=[ 4933], 00:26:17.649 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:26:17.649 | 99.99th=[ 5067] 00:26:17.649 bw ( KiB/s): min=10240, max=227328, per=2.78%, avg=105267.20, stdev=106944.04, samples=5 00:26:17.649 iops : min= 10, max= 222, avg=102.80, stdev=104.44, samples=5 00:26:17.649 lat (msec) : 250=0.26%, 750=45.45%, 1000=17.40%, >=2000=36.88% 00:26:17.649 cpu : usr=0.01%, sys=1.17%, ctx=624, majf=0, minf=32769 00:26:17.649 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6% 00:26:17.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.649 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:26:17.649 issued rwts: total=385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.649 job4: (groupid=0, jobs=1): err= 0: pid=393152: Fri Dec 13 19:20:50 2024 00:26:17.649 read: IOPS=20, BW=20.8MiB/s (21.8MB/s)(221MiB/10625msec) 00:26:17.649 slat (usec): min=58, max=2122.0k, avg=47532.62, stdev=276649.40 00:26:17.649 clat (msec): min=119, max=9442, avg=5656.76, stdev=3730.48 00:26:17.649 lat (msec): min=969, max=9444, avg=5704.29, stdev=3714.62 00:26:17.649 clat percentiles (msec): 00:26:17.649 | 1.00th=[ 961], 5.00th=[ 1028], 10.00th=[ 1116], 20.00th=[ 1250], 00:26:17.649 | 30.00th=[ 1284], 40.00th=[ 3071], 50.00th=[ 8658], 60.00th=[ 8792], 00:26:17.649 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 9329], 00:26:17.649 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:26:17.649 | 99.99th=[ 9463] 00:26:17.649 bw ( KiB/s): min= 2048, max=135168, per=0.73%, avg=27501.71, stdev=48245.61, samples=7 00:26:17.649 iops : min= 2, max= 132, avg=26.86, stdev=47.11, samples=7 00:26:17.649 lat (msec) : 250=0.45%, 1000=3.17%, 2000=32.58%, >=2000=63.80% 00:26:17.649 cpu : usr=0.04%, sys=0.88%, ctx=311, majf=0, minf=32769 00:26:17.649 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.5%, 
>=64=71.5% 00:26:17.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.649 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:26:17.649 issued rwts: total=221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.649 job4: (groupid=0, jobs=1): err= 0: pid=393153: Fri Dec 13 19:20:50 2024 00:26:17.649 read: IOPS=6, BW=6576KiB/s (6734kB/s)(69.0MiB/10745msec) 00:26:17.649 slat (usec): min=472, max=2131.2k, avg=154091.74, stdev=526969.73 00:26:17.649 clat (msec): min=111, max=10740, avg=9351.31, stdev=2681.14 00:26:17.649 lat (msec): min=2142, max=10744, avg=9505.40, stdev=2436.69 00:26:17.649 clat percentiles (msec): 00:26:17.649 | 1.00th=[ 112], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 8658], 00:26:17.649 | 30.00th=[10402], 40.00th=[10537], 50.00th=[10671], 60.00th=[10671], 00:26:17.649 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:26:17.649 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:17.649 | 99.99th=[10805] 00:26:17.649 lat (msec) : 250=1.45%, >=2000=98.55% 00:26:17.649 cpu : usr=0.00%, sys=0.62%, ctx=127, majf=0, minf=17665 00:26:17.649 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.2%, 32=46.4%, >=64=8.7% 00:26:17.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.649 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:17.649 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.649 job4: (groupid=0, jobs=1): err= 0: pid=393154: Fri Dec 13 19:20:50 2024 00:26:17.649 read: IOPS=46, BW=46.9MiB/s (49.2MB/s)(470MiB/10014msec) 00:26:17.649 slat (usec): min=40, max=2084.7k, avg=21273.71, stdev=185356.43 00:26:17.649 clat (msec): min=12, max=9052, avg=513.91, stdev=1159.12 00:26:17.649 lat (msec): min=14, max=9084, avg=535.18, stdev=1224.85 00:26:17.649 clat percentiles (msec): 00:26:17.649 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 50], 20.00th=[ 100], 00:26:17.649 | 30.00th=[ 125], 40.00th=[ 234], 50.00th=[ 380], 60.00th=[ 489], 00:26:17.649 | 70.00th=[ 502], 80.00th=[ 514], 90.00th=[ 523], 95.00th=[ 634], 00:26:17.649 | 99.00th=[ 8926], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:26:17.649 | 99.99th=[ 9060] 00:26:17.649 bw ( KiB/s): min=219136, max=219136, per=5.78%, avg=219136.00, stdev= 0.00, samples=1 00:26:17.649 iops : min= 214, max= 214, avg=214.00, stdev= 0.00, samples=1 00:26:17.649 lat (msec) : 20=1.91%, 50=8.30%, 100=9.79%, 250=21.06%, 500=27.66% 00:26:17.649 lat (msec) : 750=27.66%, 1000=0.21%, >=2000=3.40% 00:26:17.649 cpu : usr=0.00%, sys=1.00%, ctx=1456, majf=0, minf=32769 00:26:17.649 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:26:17.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.649 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:17.649 issued rwts: total=470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.649 job4: (groupid=0, jobs=1): err= 0: pid=393155: Fri Dec 13 19:20:50 2024 00:26:17.649 read: IOPS=184, BW=184MiB/s (193MB/s)(1977MiB/10735msec) 00:26:17.649 slat (usec): min=40, max=4254.8k, avg=5367.08, stdev=95747.43 00:26:17.649 clat (msec): min=111, max=5309, avg=669.46, stdev=1092.15 00:26:17.649 lat (msec): min=254, max=5345, avg=674.83, stdev=1096.62 
00:26:17.649 clat percentiles (msec): 00:26:17.649 | 1.00th=[ 255], 5.00th=[ 257], 10.00th=[ 259], 20.00th=[ 259], 00:26:17.649 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 384], 60.00th=[ 393], 00:26:17.649 | 70.00th=[ 397], 80.00th=[ 443], 90.00th=[ 1028], 95.00th=[ 4463], 00:26:17.649 | 99.00th=[ 5067], 99.50th=[ 5201], 99.90th=[ 5336], 99.95th=[ 5336], 00:26:17.649 | 99.99th=[ 5336] 00:26:17.649 bw ( KiB/s): min=61440, max=501760, per=8.32%, avg=315562.67, stdev=151213.12, samples=12 00:26:17.649 iops : min= 60, max= 490, avg=308.17, stdev=147.67, samples=12 00:26:17.649 lat (msec) : 250=0.05%, 500=83.71%, 750=3.44%, 1000=1.97%, 2000=4.40% 00:26:17.649 lat (msec) : >=2000=6.42% 00:26:17.649 cpu : usr=0.12%, sys=2.66%, ctx=2125, majf=0, minf=32769 00:26:17.649 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:17.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.649 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.649 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.649 job4: (groupid=0, jobs=1): err= 0: pid=393156: Fri Dec 13 19:20:50 2024 00:26:17.649 read: IOPS=2, BW=2507KiB/s (2567kB/s)(26.0MiB/10619msec) 00:26:17.649 slat (usec): min=427, max=2163.1k, avg=403830.15, stdev=808088.98 00:26:17.649 clat (msec): min=119, max=10615, avg=7210.37, stdev=3650.10 00:26:17.649 lat (msec): min=2119, max=10618, avg=7614.20, stdev=3406.89 00:26:17.649 clat percentiles (msec): 00:26:17.649 | 1.00th=[ 120], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 2232], 00:26:17.649 | 30.00th=[ 4329], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[ 8658], 00:26:17.649 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:26:17.649 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:26:17.649 | 99.99th=[10671] 00:26:17.649 lat (msec) : 250=3.85%, >=2000=96.15% 00:26:17.649 cpu : usr=0.00%, sys=0.15%, ctx=67, majf=0, minf=6657 00:26:17.649 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:26:17.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.649 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:17.649 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.649 job4: (groupid=0, jobs=1): err= 0: pid=393157: Fri Dec 13 19:20:50 2024 00:26:17.649 read: IOPS=58, BW=58.1MiB/s (60.9MB/s)(582MiB/10017msec) 00:26:17.649 slat (usec): min=49, max=2078.8k, avg=17180.37, stdev=166184.09 00:26:17.649 clat (msec): min=15, max=8976, avg=1111.18, stdev=2455.46 00:26:17.649 lat (msec): min=18, max=8983, avg=1128.36, stdev=2476.87 00:26:17.649 clat percentiles (msec): 00:26:17.649 | 1.00th=[ 26], 5.00th=[ 73], 10.00th=[ 142], 20.00th=[ 253], 00:26:17.649 | 30.00th=[ 262], 40.00th=[ 264], 50.00th=[ 266], 60.00th=[ 275], 00:26:17.649 | 70.00th=[ 334], 80.00th=[ 401], 90.00th=[ 4799], 95.00th=[ 8926], 00:26:17.649 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:26:17.649 | 99.99th=[ 8926] 00:26:17.649 bw ( KiB/s): min=47104, max=483328, per=8.19%, avg=310613.33, stdev=231852.54, samples=3 00:26:17.649 iops : min= 46, max= 472, avg=303.33, stdev=226.42, samples=3 00:26:17.649 lat (msec) : 20=0.52%, 50=2.92%, 100=3.78%, 250=11.86%, 500=65.46% 00:26:17.649 lat (msec) : 750=4.12%, >=2000=11.34% 00:26:17.649 cpu : 
usr=0.00%, sys=1.45%, ctx=1361, majf=0, minf=32769 00:26:17.649 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:26:17.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.649 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:17.649 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.649 job4: (groupid=0, jobs=1): err= 0: pid=393158: Fri Dec 13 19:20:50 2024 00:26:17.649 read: IOPS=15, BW=15.9MiB/s (16.7MB/s)(168MiB/10575msec) 00:26:17.649 slat (usec): min=565, max=2123.6k, avg=62231.36, stdev=327245.13 00:26:17.649 clat (msec): min=118, max=8909, avg=2018.88, stdev=1731.52 00:26:17.649 lat (msec): min=439, max=8912, avg=2081.12, stdev=1800.97 00:26:17.649 clat percentiles (msec): 00:26:17.649 | 1.00th=[ 439], 5.00th=[ 464], 10.00th=[ 502], 20.00th=[ 1552], 00:26:17.649 | 30.00th=[ 1620], 40.00th=[ 1670], 50.00th=[ 1703], 60.00th=[ 1754], 00:26:17.649 | 70.00th=[ 1821], 80.00th=[ 1871], 90.00th=[ 2702], 95.00th=[ 7013], 00:26:17.649 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:26:17.649 | 99.99th=[ 8926] 00:26:17.649 bw ( KiB/s): min=26624, max=55296, per=1.08%, avg=40960.00, stdev=20274.17, samples=2 00:26:17.649 iops : min= 26, max= 54, avg=40.00, stdev=19.80, samples=2 00:26:17.649 lat (msec) : 250=0.60%, 500=8.93%, 750=4.76%, 2000=75.60%, >=2000=10.12% 00:26:17.649 cpu : usr=0.00%, sys=0.67%, ctx=542, majf=0, minf=32769 00:26:17.649 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.5%, 32=19.0%, >=64=62.5% 00:26:17.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.649 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:26:17.650 issued rwts: total=168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.650 job4: (groupid=0, jobs=1): err= 0: pid=393159: Fri Dec 13 19:20:50 2024 00:26:17.650 read: IOPS=144, BW=144MiB/s (151MB/s)(1532MiB/10620msec) 00:26:17.650 slat (usec): min=40, max=2060.9k, avg=6851.12, stdev=78904.84 00:26:17.650 clat (msec): min=116, max=4802, avg=578.18, stdev=609.80 00:26:17.650 lat (msec): min=255, max=4805, avg=585.03, stdev=619.51 00:26:17.650 clat percentiles (msec): 00:26:17.650 | 1.00th=[ 257], 5.00th=[ 262], 10.00th=[ 266], 20.00th=[ 384], 00:26:17.650 | 30.00th=[ 388], 40.00th=[ 388], 50.00th=[ 397], 60.00th=[ 405], 00:26:17.650 | 70.00th=[ 435], 80.00th=[ 477], 90.00th=[ 818], 95.00th=[ 1687], 00:26:17.650 | 99.00th=[ 4665], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:26:17.650 | 99.99th=[ 4799] 00:26:17.650 bw ( KiB/s): min= 2048, max=495616, per=6.90%, avg=261534.09, stdev=143109.02, samples=11 00:26:17.650 iops : min= 2, max= 484, avg=255.36, stdev=139.75, samples=11 00:26:17.650 lat (msec) : 250=0.07%, 500=82.18%, 750=5.94%, 1000=1.83%, 2000=8.29% 00:26:17.650 lat (msec) : >=2000=1.70% 00:26:17.650 cpu : usr=0.07%, sys=1.97%, ctx=1540, majf=0, minf=32769 00:26:17.650 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:26:17.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.650 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.650 issued rwts: total=1532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.650 job5: (groupid=0, jobs=1): err= 0: pid=393160: Fri Dec 13 
19:20:50 2024 00:26:17.650 read: IOPS=157, BW=157MiB/s (165MB/s)(1573MiB/10014msec) 00:26:17.650 slat (usec): min=42, max=2057.3k, avg=6354.03, stdev=69684.45 00:26:17.650 clat (msec): min=13, max=2901, avg=626.45, stdev=694.31 00:26:17.650 lat (msec): min=14, max=2904, avg=632.81, stdev=699.13 00:26:17.650 clat percentiles (msec): 00:26:17.650 | 1.00th=[ 28], 5.00th=[ 90], 10.00th=[ 171], 20.00th=[ 222], 00:26:17.650 | 30.00th=[ 284], 40.00th=[ 439], 50.00th=[ 531], 60.00th=[ 550], 00:26:17.650 | 70.00th=[ 567], 80.00th=[ 625], 90.00th=[ 651], 95.00th=[ 2802], 00:26:17.650 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2903], 99.95th=[ 2903], 00:26:17.650 | 99.99th=[ 2903] 00:26:17.650 bw ( KiB/s): min=59392, max=411648, per=5.50%, avg=208523.64, stdev=101287.29, samples=11 00:26:17.650 iops : min= 58, max= 402, avg=203.64, stdev=98.91, samples=11 00:26:17.650 lat (msec) : 20=0.45%, 50=2.03%, 100=3.12%, 250=19.64%, 500=20.22% 00:26:17.650 lat (msec) : 750=46.03%, >=2000=8.52% 00:26:17.650 cpu : usr=0.05%, sys=1.68%, ctx=2783, majf=0, minf=32769 00:26:17.650 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:26:17.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.650 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.650 issued rwts: total=1573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.650 job5: (groupid=0, jobs=1): err= 0: pid=393161: Fri Dec 13 19:20:50 2024 00:26:17.650 read: IOPS=82, BW=83.0MiB/s (87.0MB/s)(831MiB/10017msec) 00:26:17.650 slat (usec): min=39, max=2160.0k, avg=12030.01, stdev=75688.02 00:26:17.650 clat (msec): min=15, max=4948, avg=1420.72, stdev=1087.66 00:26:17.650 lat (msec): min=17, max=5046, avg=1432.75, stdev=1093.38 00:26:17.650 clat percentiles (msec): 00:26:17.650 | 1.00th=[ 39], 5.00th=[ 161], 10.00th=[ 372], 20.00th=[ 456], 00:26:17.650 | 30.00th=[ 518], 40.00th=[ 844], 50.00th=[ 1133], 60.00th=[ 1536], 00:26:17.650 | 70.00th=[ 1871], 80.00th=[ 2198], 90.00th=[ 3205], 95.00th=[ 3675], 00:26:17.650 | 99.00th=[ 4111], 99.50th=[ 4178], 99.90th=[ 4933], 99.95th=[ 4933], 00:26:17.650 | 99.99th=[ 4933] 00:26:17.650 bw ( KiB/s): min=22528, max=278528, per=2.42%, avg=91687.38, stdev=71470.77, samples=13 00:26:17.650 iops : min= 22, max= 272, avg=89.54, stdev=69.80, samples=13 00:26:17.650 lat (msec) : 20=0.24%, 50=1.44%, 100=1.93%, 250=3.49%, 500=19.49% 00:26:17.650 lat (msec) : 750=10.47%, 1000=8.66%, 2000=25.39%, >=2000=28.88% 00:26:17.650 cpu : usr=0.07%, sys=1.82%, ctx=1996, majf=0, minf=32769 00:26:17.650 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:26:17.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.650 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.650 issued rwts: total=831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.650 job5: (groupid=0, jobs=1): err= 0: pid=393162: Fri Dec 13 19:20:50 2024 00:26:17.650 read: IOPS=84, BW=84.9MiB/s (89.0MB/s)(851MiB/10024msec) 00:26:17.650 slat (usec): min=51, max=2062.3k, avg=11754.81, stdev=95206.45 00:26:17.650 clat (msec): min=16, max=6481, avg=1279.17, stdev=1596.19 00:26:17.650 lat (msec): min=28, max=6493, avg=1290.93, stdev=1607.83 00:26:17.650 clat percentiles (msec): 00:26:17.650 | 1.00th=[ 44], 5.00th=[ 138], 10.00th=[ 239], 20.00th=[ 262], 00:26:17.650 | 30.00th=[ 300], 
40.00th=[ 334], 50.00th=[ 472], 60.00th=[ 768], 00:26:17.650 | 70.00th=[ 1183], 80.00th=[ 1368], 90.00th=[ 4329], 95.00th=[ 4597], 00:26:17.650 | 99.00th=[ 6409], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:26:17.650 | 99.99th=[ 6477] 00:26:17.650 bw ( KiB/s): min=10240, max=403456, per=3.55%, avg=134698.55, stdev=140970.95, samples=11 00:26:17.650 iops : min= 10, max= 394, avg=131.36, stdev=137.70, samples=11 00:26:17.650 lat (msec) : 20=0.12%, 50=1.18%, 100=2.35%, 250=12.81%, 500=35.37% 00:26:17.650 lat (msec) : 750=7.87%, 1000=4.11%, 2000=16.57%, >=2000=19.62% 00:26:17.650 cpu : usr=0.05%, sys=1.84%, ctx=2733, majf=0, minf=32769 00:26:17.650 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:26:17.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.650 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:17.650 issued rwts: total=851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.650 job5: (groupid=0, jobs=1): err= 0: pid=393163: Fri Dec 13 19:20:50 2024 00:26:17.650 read: IOPS=37, BW=37.6MiB/s (39.5MB/s)(380MiB/10093msec) 00:26:17.650 slat (usec): min=513, max=2050.5k, avg=26317.19, stdev=140014.77 00:26:17.650 clat (msec): min=89, max=8447, avg=2512.00, stdev=1435.63 00:26:17.650 lat (msec): min=125, max=8497, avg=2538.32, stdev=1454.21 00:26:17.650 clat percentiles (msec): 00:26:17.650 | 1.00th=[ 167], 5.00th=[ 506], 10.00th=[ 953], 20.00th=[ 1284], 00:26:17.650 | 30.00th=[ 1334], 40.00th=[ 1737], 50.00th=[ 2299], 60.00th=[ 2534], 00:26:17.650 | 70.00th=[ 3507], 80.00th=[ 4077], 90.00th=[ 4463], 95.00th=[ 4665], 00:26:17.650 | 99.00th=[ 4799], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:26:17.650 | 99.99th=[ 8423] 00:26:17.650 bw ( KiB/s): min=28672, max=88064, per=1.24%, avg=46935.55, stdev=19792.44, samples=11 00:26:17.650 iops : min= 28, max= 86, avg=45.82, stdev=19.34, samples=11 00:26:17.650 lat (msec) : 100=0.26%, 250=1.58%, 500=2.89%, 750=2.11%, 1000=3.68% 00:26:17.650 lat (msec) : 2000=32.89%, >=2000=56.58% 00:26:17.650 cpu : usr=0.01%, sys=1.06%, ctx=1188, majf=0, minf=32769 00:26:17.650 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:26:17.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.650 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:26:17.650 issued rwts: total=380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.650 job5: (groupid=0, jobs=1): err= 0: pid=393164: Fri Dec 13 19:20:50 2024 00:26:17.650 read: IOPS=67, BW=67.8MiB/s (71.1MB/s)(687MiB/10133msec) 00:26:17.650 slat (usec): min=42, max=2089.9k, avg=14634.65, stdev=105908.45 00:26:17.650 clat (msec): min=75, max=4952, avg=1285.23, stdev=999.52 00:26:17.650 lat (msec): min=173, max=4980, avg=1299.87, stdev=1008.37 00:26:17.650 clat percentiles (msec): 00:26:17.650 | 1.00th=[ 435], 5.00th=[ 542], 10.00th=[ 558], 20.00th=[ 575], 00:26:17.650 | 30.00th=[ 592], 40.00th=[ 751], 50.00th=[ 1070], 60.00th=[ 1267], 00:26:17.650 | 70.00th=[ 1536], 80.00th=[ 1770], 90.00th=[ 1854], 95.00th=[ 4799], 00:26:17.650 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:26:17.650 | 99.99th=[ 4933] 00:26:17.650 bw ( KiB/s): min=18318, max=243712, per=2.74%, avg=104045.91, stdev=74834.88, samples=11 00:26:17.650 iops : min= 17, max= 238, avg=101.45, stdev=73.18, samples=11 
00:26:17.650 lat (msec) : 100=0.15%, 250=0.15%, 500=0.87%, 750=38.57%, 1000=8.88% 00:26:17.650 lat (msec) : 2000=44.25%, >=2000=7.13% 00:26:17.650 cpu : usr=0.00%, sys=1.47%, ctx=1348, majf=0, minf=32769 00:26:17.650 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:26:17.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.650 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:17.650 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.650 job5: (groupid=0, jobs=1): err= 0: pid=393165: Fri Dec 13 19:20:50 2024 00:26:17.650 read: IOPS=29, BW=29.6MiB/s (31.1MB/s)(298MiB/10059msec) 00:26:17.650 slat (usec): min=38, max=4225.5k, avg=33627.11, stdev=264517.81 00:26:17.650 clat (msec): min=35, max=7367, avg=1772.17, stdev=1400.19 00:26:17.650 lat (msec): min=71, max=7416, avg=1805.80, stdev=1434.82 00:26:17.650 clat percentiles (msec): 00:26:17.650 | 1.00th=[ 80], 5.00th=[ 460], 10.00th=[ 609], 20.00th=[ 1116], 00:26:17.650 | 30.00th=[ 1284], 40.00th=[ 1485], 50.00th=[ 1620], 60.00th=[ 1670], 00:26:17.650 | 70.00th=[ 1720], 80.00th=[ 1955], 90.00th=[ 2165], 95.00th=[ 5470], 00:26:17.650 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[ 7349], 99.95th=[ 7349], 00:26:17.650 | 99.99th=[ 7349] 00:26:17.650 bw ( KiB/s): min=36864, max=106496, per=1.53%, avg=58113.83, stdev=25851.30, samples=6 00:26:17.650 iops : min= 36, max= 104, avg=56.67, stdev=25.29, samples=6 00:26:17.650 lat (msec) : 50=0.34%, 100=1.01%, 250=1.01%, 500=4.36%, 750=5.03% 00:26:17.650 lat (msec) : 1000=2.68%, 2000=68.12%, >=2000=17.45% 00:26:17.650 cpu : usr=0.00%, sys=1.10%, ctx=950, majf=0, minf=32769 00:26:17.650 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.7%, >=64=78.9% 00:26:17.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.650 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:26:17.650 issued rwts: total=298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:17.650 job5: (groupid=0, jobs=1): err= 0: pid=393166: Fri Dec 13 19:20:50 2024 00:26:17.650 read: IOPS=37, BW=37.4MiB/s (39.2MB/s)(376MiB/10045msec) 00:26:17.650 slat (usec): min=50, max=3630.3k, avg=26656.17, stdev=188160.67 00:26:17.650 clat (msec): min=19, max=8098, avg=3155.11, stdev=2750.80 00:26:17.650 lat (msec): min=62, max=8112, avg=3181.76, stdev=2756.73 00:26:17.650 clat percentiles (msec): 00:26:17.650 | 1.00th=[ 80], 5.00th=[ 785], 10.00th=[ 793], 20.00th=[ 835], 00:26:17.650 | 30.00th=[ 894], 40.00th=[ 1036], 50.00th=[ 1301], 60.00th=[ 3071], 00:26:17.650 | 70.00th=[ 5269], 80.00th=[ 6477], 90.00th=[ 7617], 95.00th=[ 7819], 00:26:17.651 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087], 00:26:17.651 | 99.99th=[ 8087] 00:26:17.651 bw ( KiB/s): min=16384, max=133120, per=1.12%, avg=42313.08, stdev=40679.37, samples=12 00:26:17.651 iops : min= 16, max= 130, avg=41.25, stdev=39.76, samples=12 00:26:17.651 lat (msec) : 20=0.27%, 100=1.06%, 250=1.06%, 500=1.06%, 750=1.06% 00:26:17.651 lat (msec) : 1000=34.31%, 2000=14.89%, >=2000=46.28% 00:26:17.651 cpu : usr=0.04%, sys=1.07%, ctx=1104, majf=0, minf=32769 00:26:17.651 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2% 00:26:17.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.651 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.4%
00:26:17.651 issued rwts: total=376,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:17.651 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:17.651 job5: (groupid=0, jobs=1): err= 0: pid=393167: Fri Dec 13 19:20:50 2024
00:26:17.651 read: IOPS=189, BW=190MiB/s (199MB/s)(1906MiB/10033msec)
00:26:17.651 slat (usec): min=35, max=2037.1k, avg=5241.39, stdev=47199.14
00:26:17.651 clat (msec): min=32, max=3776, avg=638.28, stdev=796.54
00:26:17.651 lat (msec): min=34, max=3800, avg=643.52, stdev=801.07
00:26:17.651 clat percentiles (msec):
00:26:17.651 | 1.00th=[ 72], 5.00th=[ 232], 10.00th=[ 264], 20.00th=[ 266],
00:26:17.651 | 30.00th=[ 271], 40.00th=[ 393], 50.00th=[ 397], 60.00th=[ 418],
00:26:17.651 | 70.00th=[ 523], 80.00th=[ 575], 90.00th=[ 1334], 95.00th=[ 3205],
00:26:17.651 | 99.00th=[ 3742], 99.50th=[ 3742], 99.90th=[ 3775], 99.95th=[ 3775],
00:26:17.651 | 99.99th=[ 3775]
00:26:17.651 bw ( KiB/s): min=38912, max=486451, per=6.40%, avg=242784.60, stdev=153000.80, samples=15
00:26:17.651 iops : min= 38, max= 475, avg=237.07, stdev=149.40, samples=15
00:26:17.651 lat (msec) : 50=0.47%, 100=1.26%, 250=3.73%, 500=62.33%, 750=20.30%
00:26:17.651 lat (msec) : 1000=1.42%, 2000=3.83%, >=2000=6.66%
00:26:17.651 cpu : usr=0.06%, sys=2.90%, ctx=1983, majf=0, minf=32769
00:26:17.651 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7%
00:26:17.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:17.651 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:17.651 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:17.651 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:17.651 job5: (groupid=0, jobs=1): err= 0: pid=393168: Fri Dec 13 19:20:50 2024
00:26:17.651 read: IOPS=194, BW=195MiB/s (204MB/s)(2069MiB/10626msec)
00:26:17.651 slat (usec): min=40, max=2011.1k, avg=5073.79, stdev=44819.26
00:26:17.651 clat (msec): min=115, max=2598, avg=612.39, stdev=576.22
00:26:17.651 lat (msec): min=324, max=2602, avg=617.46, stdev=578.20
00:26:17.651 clat percentiles (msec):
00:26:17.651 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 338], 20.00th=[ 359],
00:26:17.651 | 30.00th=[ 372], 40.00th=[ 388], 50.00th=[ 397], 60.00th=[ 409],
00:26:17.651 | 70.00th=[ 460], 80.00th=[ 498], 90.00th=[ 1653], 95.00th=[ 2333],
00:26:17.651 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2601], 99.95th=[ 2601],
00:26:17.651 | 99.99th=[ 2601]
00:26:17.651 bw ( KiB/s): min= 2000, max=348160, per=6.99%, avg=265074.13, stdev=108136.88, samples=15
00:26:17.651 iops : min= 1, max= 340, avg=258.73, stdev=105.77, samples=15
00:26:17.651 lat (msec) : 250=0.05%, 500=81.83%, 750=5.07%, 1000=0.82%, 2000=5.36%
00:26:17.651 lat (msec) : >=2000=6.86%
00:26:17.651 cpu : usr=0.11%, sys=2.50%, ctx=2100, majf=0, minf=32769
00:26:17.651 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:26:17.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:17.651 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:17.651 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:17.651 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:17.651 job5: (groupid=0, jobs=1): err= 0: pid=393169: Fri Dec 13 19:20:50 2024
00:26:17.651 read: IOPS=140, BW=141MiB/s (148MB/s)(1414MiB/10041msec)
00:26:17.651 slat (usec): min=39, max=1781.9k, avg=7081.18, stdev=60400.87
00:26:17.651 clat (msec): min=19, max=2494, avg=729.31, stdev=534.23
00:26:17.651 lat (msec): min=64, max=2495, avg=736.39, stdev=537.38
00:26:17.651 clat percentiles (msec):
00:26:17.651 | 1.00th=[ 368], 5.00th=[ 372], 10.00th=[ 376], 20.00th=[ 393],
00:26:17.651 | 30.00th=[ 430], 40.00th=[ 514], 50.00th=[ 523], 60.00th=[ 535],
00:26:17.651 | 70.00th=[ 575], 80.00th=[ 743], 90.00th=[ 1787], 95.00th=[ 2022],
00:26:17.651 | 99.00th=[ 2467], 99.50th=[ 2467], 99.90th=[ 2467], 99.95th=[ 2500],
00:26:17.651 | 99.99th=[ 2500]
00:26:17.651 bw ( KiB/s): min=26624, max=339968, per=4.96%, avg=188104.36, stdev=110750.12, samples=14
00:26:17.651 iops : min= 26, max= 332, avg=183.57, stdev=108.31, samples=14
00:26:17.651 lat (msec) : 20=0.07%, 100=0.28%, 250=0.21%, 500=36.28%, 750=43.35%
00:26:17.651 lat (msec) : 1000=0.92%, 2000=13.08%, >=2000=5.80%
00:26:17.651 cpu : usr=0.15%, sys=2.29%, ctx=1468, majf=0, minf=32769
00:26:17.651 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5%
00:26:17.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:17.651 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:17.651 issued rwts: total=1414,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:17.651 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:17.651 job5: (groupid=0, jobs=1): err= 0: pid=393170: Fri Dec 13 19:20:50 2024
00:26:17.651 read: IOPS=88, BW=88.7MiB/s (93.0MB/s)(888MiB/10012msec)
00:26:17.651 slat (usec): min=377, max=2066.9k, avg=11257.81, stdev=92736.81
00:26:17.651 clat (msec): min=11, max=3408, avg=1124.92, stdev=940.19
00:26:17.651 lat (msec): min=12, max=3410, avg=1136.18, stdev=944.42
00:26:17.651 clat percentiles (msec):
00:26:17.651 | 1.00th=[ 25], 5.00th=[ 93], 10.00th=[ 284], 20.00th=[ 550],
00:26:17.651 | 30.00th=[ 575], 40.00th=[ 600], 50.00th=[ 609], 60.00th=[ 1062],
00:26:17.651 | 70.00th=[ 1200], 80.00th=[ 1536], 90.00th=[ 3071], 95.00th=[ 3306],
00:26:17.651 | 99.00th=[ 3406], 99.50th=[ 3406], 99.90th=[ 3406], 99.95th=[ 3406],
00:26:17.651 | 99.99th=[ 3406]
00:26:17.651 bw ( KiB/s): min=20480, max=237568, per=3.13%, avg=118784.00, stdev=76552.48, samples=11
00:26:17.651 iops : min= 20, max= 232, avg=116.00, stdev=74.76, samples=11
00:26:17.651 lat (msec) : 20=0.68%, 50=1.91%, 100=2.82%, 250=3.83%, 500=4.62%
00:26:17.651 lat (msec) : 750=40.88%, 1000=2.03%, 2000=28.27%, >=2000=14.98%
00:26:17.651 cpu : usr=0.01%, sys=1.29%, ctx=2244, majf=0, minf=32769
00:26:17.651 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9%
00:26:17.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:17.651 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:17.651 issued rwts: total=888,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:17.651 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:17.651 job5: (groupid=0, jobs=1): err= 0: pid=393171: Fri Dec 13 19:20:50 2024
00:26:17.651 read: IOPS=82, BW=82.5MiB/s (86.5MB/s)(826MiB/10012msec)
00:26:17.651 slat (usec): min=40, max=2161.7k, avg=12102.87, stdev=97716.03
00:26:17.651 clat (msec): min=10, max=5965, avg=1474.99, stdev=1817.50
00:26:17.651 lat (msec): min=11, max=5971, avg=1487.09, stdev=1827.02
00:26:17.651 clat percentiles (msec):
00:26:17.651 | 1.00th=[ 20], 5.00th=[ 77], 10.00th=[ 309], 20.00th=[ 393],
00:26:17.651 | 30.00th=[ 405], 40.00th=[ 443], 50.00th=[ 518], 60.00th=[ 751],
00:26:17.651 | 70.00th=[ 1385], 80.00th=[ 1955], 90.00th=[ 5604], 95.00th=[ 5738],
00:26:17.651 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940],
00:26:17.651 | 99.99th=[ 5940]
00:26:17.651 bw ( KiB/s): min= 6144, max=315392, per=2.50%, avg=94989.08, stdev=99914.30, samples=13
00:26:17.651 iops : min= 6, max= 308, avg=92.69, stdev=97.61, samples=13
00:26:17.651 lat (msec) : 20=1.09%, 50=2.06%, 100=3.51%, 250=2.66%, 500=38.62%
00:26:17.651 lat (msec) : 750=11.99%, 1000=3.39%, 2000=17.68%, >=2000=19.01%
00:26:17.651 cpu : usr=0.03%, sys=1.31%, ctx=1515, majf=0, minf=32769
00:26:17.651 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4%
00:26:17.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:17.651 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:17.651 issued rwts: total=826,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:17.651 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:17.651 job5: (groupid=0, jobs=1): err= 0: pid=393172: Fri Dec 13 19:20:50 2024
00:26:17.651 read: IOPS=17, BW=17.3MiB/s (18.2MB/s)(184MiB/10612msec)
00:26:17.651 slat (usec): min=1038, max=2052.4k, avg=57037.40, stdev=269321.90
00:26:17.651 clat (msec): min=115, max=7839, avg=2930.86, stdev=1311.55
00:26:17.651 lat (msec): min=1665, max=8027, avg=2987.89, stdev=1348.40
00:26:17.651 clat percentiles (msec):
00:26:17.651 | 1.00th=[ 1670], 5.00th=[ 1720], 10.00th=[ 1770], 20.00th=[ 2022],
00:26:17.651 | 30.00th=[ 2106], 40.00th=[ 2400], 50.00th=[ 2668], 60.00th=[ 2802],
00:26:17.651 | 70.00th=[ 3004], 80.00th=[ 3272], 90.00th=[ 5873], 95.00th=[ 6208],
00:26:17.651 | 99.00th=[ 6342], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819],
00:26:17.651 | 99.99th=[ 7819]
00:26:17.651 bw ( KiB/s): min= 2048, max=53248, per=0.77%, avg=29160.25, stdev=24857.13, samples=4
00:26:17.651 iops : min= 2, max= 52, avg=28.25, stdev=24.06, samples=4
00:26:17.651 lat (msec) : 250=0.54%, 2000=17.39%, >=2000=82.07%
00:26:17.651 cpu : usr=0.00%, sys=0.95%, ctx=589, majf=0, minf=32769
00:26:17.651 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.7%, 32=17.4%, >=64=65.8%
00:26:17.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:17.651 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7%
00:26:17.651 issued rwts: total=184,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:17.651 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:17.651
00:26:17.651 Run status group 0 (all jobs):
00:26:17.652 READ: bw=3704MiB/s (3884MB/s), 1631KiB/s-218MiB/s (1670kB/s-229MB/s), io=38.9GiB (41.8GB), run=10012-10755msec
00:26:17.652
00:26:17.652 Disk stats (read/write):
00:26:17.652 nvme0n1: ios=44606/0, merge=0/0, ticks=8052501/0, in_queue=8052501, util=98.17%
00:26:17.652 nvme1n1: ios=29731/0, merge=0/0, ticks=5727349/0, in_queue=5727349, util=98.40%
00:26:17.652 nvme2n1: ios=51613/0, merge=0/0, ticks=7027829/0, in_queue=7027829, util=98.60%
00:26:17.652 nvme3n1: ios=28628/0, merge=0/0, ticks=7086191/0, in_queue=7086191, util=98.67%
00:26:17.652 nvme4n1: ios=63031/0, merge=0/0, ticks=6492640/0, in_queue=6492640, util=99.01%
00:26:17.652 nvme5n1: ios=98127/0, merge=0/0, ticks=7420286/0, in_queue=7420286, util=99.08%
00:26:17.652 19:20:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync
00:26:17.652 19:20:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5
00:26:17.652 19:20:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:26:17.652 19:20:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0
00:26:17.911 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s)
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:26:17.911 19:20:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:26:18.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:26:18.849 19:20:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:26:19.785 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:26:19.785 19:20:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:26:20.723 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:26:20.723 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003
00:26:20.723 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:26:20.723 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:20.723 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003
00:26:20.980 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003
00:26:20.981 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:20.981 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:26:20.981 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:26:20.981 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.981 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:20.981 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.981 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:26:20.981 19:20:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:26:21.916 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:26:21.916 19:20:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:26:22.854 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT
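The xtrace above corresponds to lines 40-43 of target/srq_overwhelm.sh plus the waitforserial_disconnect helper from common/autotest_common.sh. A minimal sketch of that teardown sequence, reconstructed from the trace; the retry bound and 1-second poll interval are assumptions (they do not appear in the log), and rpc_cmd is the suite's wrapper around the SPDK RPC client:

    # Poll until no block device reports the given NVMe serial number.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1   # retry bound is an assumption
            sleep 1
        done
        # The trace also re-checks with the flat listing before succeeding.
        if lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
            return 1
        fi
        return 0
    }

    # One disconnect/wait/delete round per subsystem, as in lines 40-43.
    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        waitforserial_disconnect "SPDK0000000000000${i}"
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done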
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:22.854 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:26:22.854 rmmod nvme_rdma
00:26:22.854 rmmod nvme_fabrics
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 391712 ']'
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 391712
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 391712 ']'
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 391712
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391712
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391712'
00:26:23.114 killing process with pid 391712
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 391712
00:26:23.114 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 391712
00:26:23.372 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:23.372 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:26:23.372
00:26:23.372 real 0m32.723s
00:26:23.372 user 1m50.562s
00:26:23.372 sys 0m16.974s
00:26:23.373 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:23.373 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:26:23.373 ************************************
00:26:23.373 END TEST nvmf_srq_overwhelm
00:26:23.373 ************************************
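The killprocess helper traced just above (common/autotest_common.sh) follows a fixed pattern. A sketch reconstructed from that trace; the branch taken when the command name equals "sudo" is not visible in this log and is only assumed:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2> /dev/null || return 1   # bail out if already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # Assumed: a bare sudo wrapper is never signalled directly.
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap the process and propagate its exit status
    }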
00:26:23.373 19:20:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma
00:26:23.373 19:20:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:23.373 19:20:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:23.373 19:20:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:23.373 ************************************
00:26:23.373 START TEST nvmf_shutdown
00:26:23.373 ************************************
00:26:23.373 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma
00:26:23.632 * Looking for test storage...
00:26:23.632 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:23.632 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:23.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:23.633 --rc genhtml_branch_coverage=1
00:26:23.633 --rc genhtml_function_coverage=1
00:26:23.633 --rc genhtml_legend=1
00:26:23.633 --rc geninfo_all_blocks=1
00:26:23.633 --rc geninfo_unexecuted_blocks=1
00:26:23.633
00:26:23.633 '
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:23.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:23.633 --rc genhtml_branch_coverage=1
00:26:23.633 --rc genhtml_function_coverage=1
00:26:23.633 --rc genhtml_legend=1
00:26:23.633 --rc geninfo_all_blocks=1
00:26:23.633 --rc geninfo_unexecuted_blocks=1
00:26:23.633
00:26:23.633 '
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:26:23.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:23.633 --rc genhtml_branch_coverage=1
00:26:23.633 --rc genhtml_function_coverage=1
00:26:23.633 --rc genhtml_legend=1
00:26:23.633 --rc geninfo_all_blocks=1
00:26:23.633 --rc geninfo_unexecuted_blocks=1
00:26:23.633
00:26:23.633 '
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:26:23.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:23.633 --rc genhtml_branch_coverage=1
00:26:23.633 --rc genhtml_function_coverage=1
00:26:23.633 --rc genhtml_legend=1
00:26:23.633 --rc geninfo_all_blocks=1
00:26:23.633 --rc geninfo_unexecuted_blocks=1
00:26:23.633
00:26:23.633 '
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:23.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:23.633 19:20:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:23.633 ************************************
00:26:23.633 START TEST nvmf_shutdown_tc1
00:26:23.633 ************************************
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:23.633 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:23.893 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:23.893 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:23.893 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable
00:26:23.893 19:20:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=()
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=()
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=()
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=()
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=()
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:26:32.017 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:26:32.017 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:26:32.017 Found net devices under 0000:d9:00.0: mlx_0_0
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:26:32.017 Found net devices under 0000:d9:00.1: mlx_0_1
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:26:32.017 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips
00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
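The discovery loop traced above (nvmf/common.sh, lines 410-429) maps each supported PCI address to its kernel interface name through sysfs. A condensed sketch of that loop; pci_devs is assumed to already hold the addresses found by the preceding PCI scan:

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the netdev directory
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done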
00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:32.018 19:21:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:32.018 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:32.018 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:32.018 altname enp217s0f0np0 00:26:32.018 altname ens818f0np0 00:26:32.018 inet 192.168.100.8/24 scope global mlx_0_0 00:26:32.018 valid_lft forever preferred_lft forever 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:32.018 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:32.018 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:32.018 altname enp217s0f1np1 00:26:32.018 altname ens818f1np1 00:26:32.018 inet 192.168.100.9/24 scope global mlx_0_1 00:26:32.018 valid_lft forever preferred_lft forever 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:32.018 
19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:32.018 192.168.100.9' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:32.018 192.168.100.9' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:26:32.018 19:21:05 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:32.018 192.168.100.9' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=399822 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 399822 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 399822 ']' 00:26:32.018 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.019 [2024-12-13 19:21:05.433585] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
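
RDMA_IP_LIST carries one address per line, so the first and second target IPs fall out of the head/tail slices seen at common.sh@485-486:

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
# Here: 192.168.100.8 and 192.168.100.9 respectively.
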
00:26:32.019 [2024-12-13 19:21:05.433647] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:32.019 [2024-12-13 19:21:05.528821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:32.019 [2024-12-13 19:21:05.551361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:32.019 [2024-12-13 19:21:05.551401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:32.019 [2024-12-13 19:21:05.551411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:32.019 [2024-12-13 19:21:05.551420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:32.019 [2024-12-13 19:21:05.551426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:32.019 [2024-12-13 19:21:05.553260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:26:32.019 [2024-12-13 19:21:05.553368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:26:32.019 [2024-12-13 19:21:05.553461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:26:32.019 [2024-12-13 19:21:05.553463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:26:32.019 [2024-12-13 19:21:05.719598] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20fe840/0x2102cf0) succeed.
00:26:32.019 [2024-12-13 19:21:05.728993] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20ffe80/0x2144390) succeed.
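
nvmfappstart passed -m 0x1E, and the reactors above came up on cores 1-4 accordingly: 0x1E is binary 11110, so bits 1 through 4 are set. A quick decode of any such core mask:

mask=0x1E
for ((core = 0; core < 8; core++)); do
    (( (mask >> core) & 1 )) && echo "core $core"   # prints cores 1 2 3 4
done
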
00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
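
The loop at shutdown.sh@28-29 above cats one block of RPC commands per subsystem into rpcs.txt; the trace elides the block itself. A plausible shape, using standard SPDK RPC names (bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener); the sizes and serial numbers here are illustrative, not the script's actual values:

for i in "${num_subsystems[@]}"; do
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420
EOF
done
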
target/shutdown.sh@36 -- # rpc_cmd 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.019 19:21:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.019 Malloc1 00:26:32.019 [2024-12-13 19:21:05.980453] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:32.019 Malloc2 00:26:32.019 Malloc3 00:26:32.019 Malloc4 00:26:32.019 Malloc5 00:26:32.019 Malloc6 00:26:32.019 Malloc7 00:26:32.019 Malloc8 00:26:32.019 Malloc9 00:26:32.019 Malloc10 00:26:32.019 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.019 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:32.019 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.019 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=400059 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 400059 /var/tmp/bdevperf.sock 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 400059 ']' 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:32.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
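
waitforlisten blocks until the freshly forked app owns its RPC socket. A simplified version of the idea (the real helper in autotest_common.sh also probes the RPC endpoint and honors a max_retries budget):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited prematurely
        [[ -S $rpc_addr ]] && return 0           # UNIX domain socket is up
        sleep 0.1
    done
    return 1
}
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
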
00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.279 { 00:26:32.279 "params": { 00:26:32.279 "name": "Nvme$subsystem", 00:26:32.279 "trtype": "$TEST_TRANSPORT", 00:26:32.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.279 "adrfam": "ipv4", 00:26:32.279 "trsvcid": "$NVMF_PORT", 00:26:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.279 "hdgst": ${hdgst:-false}, 00:26:32.279 "ddgst": ${ddgst:-false} 00:26:32.279 }, 00:26:32.279 "method": "bdev_nvme_attach_controller" 00:26:32.279 } 00:26:32.279 EOF 00:26:32.279 )") 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.279 { 00:26:32.279 "params": { 00:26:32.279 "name": "Nvme$subsystem", 00:26:32.279 "trtype": "$TEST_TRANSPORT", 00:26:32.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.279 "adrfam": "ipv4", 00:26:32.279 "trsvcid": "$NVMF_PORT", 00:26:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.279 "hdgst": ${hdgst:-false}, 00:26:32.279 "ddgst": ${ddgst:-false} 00:26:32.279 }, 00:26:32.279 "method": "bdev_nvme_attach_controller" 00:26:32.279 } 00:26:32.279 EOF 00:26:32.279 )") 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.279 { 00:26:32.279 "params": { 00:26:32.279 "name": "Nvme$subsystem", 00:26:32.279 "trtype": "$TEST_TRANSPORT", 00:26:32.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.279 "adrfam": "ipv4", 00:26:32.279 "trsvcid": "$NVMF_PORT", 00:26:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.279 "hdgst": ${hdgst:-false}, 00:26:32.279 "ddgst": ${ddgst:-false} 00:26:32.279 }, 00:26:32.279 "method": "bdev_nvme_attach_controller" 00:26:32.279 } 00:26:32.279 EOF 00:26:32.279 )") 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
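
Each config+=("$(cat <<-EOF ...)") iteration above captures one heredoc fragment with $subsystem, $TEST_TRANSPORT and friends already expanded; the single comma-joined string printed further down comes from expanding "${config[*]}" under IFS=, (common.sh@585-586), and the real helper embeds that join inside the bdev config it hands to --json. A stripped-down sketch of just the array-building and joining mechanics:

gen_entries() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
    done
    # Embed the comma-joined fragments in a JSON array; jq validates and pretty-prints.
    jq . <<JSON
[ $(IFS=","; printf '%s\n' "${config[*]}") ]
JSON
}
gen_entries 1 2 3
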
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.279 { 00:26:32.279 "params": { 00:26:32.279 "name": "Nvme$subsystem", 00:26:32.279 "trtype": "$TEST_TRANSPORT", 00:26:32.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.279 "adrfam": "ipv4", 00:26:32.279 "trsvcid": "$NVMF_PORT", 00:26:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.279 "hdgst": ${hdgst:-false}, 00:26:32.279 "ddgst": ${ddgst:-false} 00:26:32.279 }, 00:26:32.279 "method": "bdev_nvme_attach_controller" 00:26:32.279 } 00:26:32.279 EOF 00:26:32.279 )") 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.279 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.279 { 00:26:32.279 "params": { 00:26:32.279 "name": "Nvme$subsystem", 00:26:32.279 "trtype": "$TEST_TRANSPORT", 00:26:32.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.279 "adrfam": "ipv4", 00:26:32.279 "trsvcid": "$NVMF_PORT", 00:26:32.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.280 "hdgst": ${hdgst:-false}, 00:26:32.280 "ddgst": ${ddgst:-false} 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 } 00:26:32.280 EOF 00:26:32.280 )") 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.280 [2024-12-13 19:21:06.474936] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:32.280 [2024-12-13 19:21:06.474993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.280 { 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme$subsystem", 00:26:32.280 "trtype": "$TEST_TRANSPORT", 00:26:32.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "$NVMF_PORT", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.280 "hdgst": ${hdgst:-false}, 00:26:32.280 "ddgst": ${ddgst:-false} 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 } 00:26:32.280 EOF 00:26:32.280 )") 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.280 { 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme$subsystem", 00:26:32.280 "trtype": "$TEST_TRANSPORT", 00:26:32.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "$NVMF_PORT", 00:26:32.280 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.280 "hdgst": ${hdgst:-false}, 00:26:32.280 "ddgst": ${ddgst:-false} 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 } 00:26:32.280 EOF 00:26:32.280 )") 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.280 { 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme$subsystem", 00:26:32.280 "trtype": "$TEST_TRANSPORT", 00:26:32.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "$NVMF_PORT", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.280 "hdgst": ${hdgst:-false}, 00:26:32.280 "ddgst": ${ddgst:-false} 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 } 00:26:32.280 EOF 00:26:32.280 )") 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.280 { 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme$subsystem", 00:26:32.280 "trtype": "$TEST_TRANSPORT", 00:26:32.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "$NVMF_PORT", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.280 "hdgst": ${hdgst:-false}, 00:26:32.280 "ddgst": ${ddgst:-false} 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 } 00:26:32.280 EOF 00:26:32.280 )") 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.280 { 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme$subsystem", 00:26:32.280 "trtype": "$TEST_TRANSPORT", 00:26:32.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "$NVMF_PORT", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.280 "hdgst": ${hdgst:-false}, 00:26:32.280 "ddgst": ${ddgst:-false} 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 } 00:26:32.280 EOF 00:26:32.280 )") 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:32.280 19:21:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme1", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme2", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme3", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme4", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme5", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme6", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme7", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme8", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme9", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:32.280 "hdgst": false, 00:26:32.280 "ddgst": false 00:26:32.280 }, 00:26:32.280 "method": "bdev_nvme_attach_controller" 00:26:32.280 },{ 00:26:32.280 "params": { 00:26:32.280 "name": "Nvme10", 00:26:32.280 "trtype": "rdma", 00:26:32.280 "traddr": "192.168.100.8", 00:26:32.280 "adrfam": "ipv4", 00:26:32.280 "trsvcid": "4420", 00:26:32.280 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:32.280 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:32.280 "hdgst": false, 00:26:32.281 "ddgst": false 00:26:32.281 }, 00:26:32.281 "method": "bdev_nvme_attach_controller" 00:26:32.281 }' 00:26:32.281 [2024-12-13 19:21:06.571036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.281 [2024-12-13 19:21:06.593434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 400059 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:33.219 19:21:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:34.157 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 400059 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 399822 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:34.157 19:21:08 
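
The sequence above is the core of tc1: shutdown.sh@84-89 SIGKILLs the scratch bdev_svc app (pid 400059, hence the "line 74: 400059 Killed" message), and the later kill -0 on the target pid proves nvmf_tgt outlived its client before real I/O is driven through bdevperf over /dev/fd/62. In outline:

kill -9 "$perfpid"          # hard-kill the client app
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"          # a non-zero exit here would fail the test: target died
./build/examples/bdevperf --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 1
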
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.157 { 00:26:34.157 "params": { 00:26:34.157 "name": "Nvme$subsystem", 00:26:34.157 "trtype": "$TEST_TRANSPORT", 00:26:34.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.157 "adrfam": "ipv4", 00:26:34.157 "trsvcid": "$NVMF_PORT", 00:26:34.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.157 "hdgst": ${hdgst:-false}, 00:26:34.157 "ddgst": ${ddgst:-false} 00:26:34.157 }, 00:26:34.157 "method": "bdev_nvme_attach_controller" 00:26:34.157 } 00:26:34.157 EOF 00:26:34.157 )") 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.157 { 00:26:34.157 "params": { 00:26:34.157 "name": "Nvme$subsystem", 00:26:34.157 "trtype": "$TEST_TRANSPORT", 00:26:34.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.157 "adrfam": "ipv4", 00:26:34.157 "trsvcid": "$NVMF_PORT", 00:26:34.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.157 "hdgst": ${hdgst:-false}, 00:26:34.157 "ddgst": ${ddgst:-false} 00:26:34.157 }, 00:26:34.157 "method": "bdev_nvme_attach_controller" 00:26:34.157 } 00:26:34.157 EOF 00:26:34.157 )") 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.157 { 00:26:34.157 "params": { 00:26:34.157 "name": "Nvme$subsystem", 00:26:34.157 "trtype": "$TEST_TRANSPORT", 00:26:34.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.157 "adrfam": "ipv4", 00:26:34.157 "trsvcid": "$NVMF_PORT", 00:26:34.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.157 "hdgst": ${hdgst:-false}, 00:26:34.157 "ddgst": ${ddgst:-false} 00:26:34.157 }, 00:26:34.157 "method": "bdev_nvme_attach_controller" 00:26:34.157 } 00:26:34.157 EOF 00:26:34.157 )") 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.157 { 00:26:34.157 "params": { 00:26:34.157 "name": "Nvme$subsystem", 00:26:34.157 "trtype": "$TEST_TRANSPORT", 00:26:34.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.157 "adrfam": "ipv4", 00:26:34.157 "trsvcid": "$NVMF_PORT", 00:26:34.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.157 "hdgst": ${hdgst:-false}, 00:26:34.157 "ddgst": ${ddgst:-false} 00:26:34.157 }, 00:26:34.157 "method": 
"bdev_nvme_attach_controller" 00:26:34.157 } 00:26:34.157 EOF 00:26:34.157 )") 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.157 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.157 { 00:26:34.158 "params": { 00:26:34.158 "name": "Nvme$subsystem", 00:26:34.158 "trtype": "$TEST_TRANSPORT", 00:26:34.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.158 "adrfam": "ipv4", 00:26:34.158 "trsvcid": "$NVMF_PORT", 00:26:34.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.158 "hdgst": ${hdgst:-false}, 00:26:34.158 "ddgst": ${ddgst:-false} 00:26:34.158 }, 00:26:34.158 "method": "bdev_nvme_attach_controller" 00:26:34.158 } 00:26:34.158 EOF 00:26:34.158 )") 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.158 { 00:26:34.158 "params": { 00:26:34.158 "name": "Nvme$subsystem", 00:26:34.158 "trtype": "$TEST_TRANSPORT", 00:26:34.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.158 "adrfam": "ipv4", 00:26:34.158 "trsvcid": "$NVMF_PORT", 00:26:34.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.158 "hdgst": ${hdgst:-false}, 00:26:34.158 "ddgst": ${ddgst:-false} 00:26:34.158 }, 00:26:34.158 "method": "bdev_nvme_attach_controller" 00:26:34.158 } 00:26:34.158 EOF 00:26:34.158 )") 00:26:34.158 [2024-12-13 19:21:08.510762] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:26:34.158 [2024-12-13 19:21:08.510813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400456 ] 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.158 { 00:26:34.158 "params": { 00:26:34.158 "name": "Nvme$subsystem", 00:26:34.158 "trtype": "$TEST_TRANSPORT", 00:26:34.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.158 "adrfam": "ipv4", 00:26:34.158 "trsvcid": "$NVMF_PORT", 00:26:34.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.158 "hdgst": ${hdgst:-false}, 00:26:34.158 "ddgst": ${ddgst:-false} 00:26:34.158 }, 00:26:34.158 "method": "bdev_nvme_attach_controller" 00:26:34.158 } 00:26:34.158 EOF 00:26:34.158 )") 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.158 { 00:26:34.158 "params": { 00:26:34.158 "name": "Nvme$subsystem", 00:26:34.158 "trtype": "$TEST_TRANSPORT", 00:26:34.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.158 "adrfam": "ipv4", 00:26:34.158 "trsvcid": "$NVMF_PORT", 00:26:34.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.158 "hdgst": ${hdgst:-false}, 00:26:34.158 "ddgst": ${ddgst:-false} 00:26:34.158 }, 00:26:34.158 "method": "bdev_nvme_attach_controller" 00:26:34.158 } 00:26:34.158 EOF 00:26:34.158 )") 00:26:34.158 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.417 { 00:26:34.417 "params": { 00:26:34.417 "name": "Nvme$subsystem", 00:26:34.417 "trtype": "$TEST_TRANSPORT", 00:26:34.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.417 "adrfam": "ipv4", 00:26:34.417 "trsvcid": "$NVMF_PORT", 00:26:34.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.417 "hdgst": ${hdgst:-false}, 00:26:34.417 "ddgst": ${ddgst:-false} 00:26:34.417 }, 00:26:34.417 "method": "bdev_nvme_attach_controller" 00:26:34.417 } 00:26:34.417 EOF 00:26:34.417 )") 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.417 { 00:26:34.417 "params": { 00:26:34.417 "name": 
"Nvme$subsystem", 00:26:34.417 "trtype": "$TEST_TRANSPORT", 00:26:34.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.417 "adrfam": "ipv4", 00:26:34.417 "trsvcid": "$NVMF_PORT", 00:26:34.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.417 "hdgst": ${hdgst:-false}, 00:26:34.417 "ddgst": ${ddgst:-false} 00:26:34.417 }, 00:26:34.417 "method": "bdev_nvme_attach_controller" 00:26:34.417 } 00:26:34.417 EOF 00:26:34.417 )") 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:34.417 19:21:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:34.417 "params": { 00:26:34.417 "name": "Nvme1", 00:26:34.417 "trtype": "rdma", 00:26:34.417 "traddr": "192.168.100.8", 00:26:34.417 "adrfam": "ipv4", 00:26:34.417 "trsvcid": "4420", 00:26:34.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:34.417 "hdgst": false, 00:26:34.417 "ddgst": false 00:26:34.417 }, 00:26:34.417 "method": "bdev_nvme_attach_controller" 00:26:34.417 },{ 00:26:34.417 "params": { 00:26:34.417 "name": "Nvme2", 00:26:34.417 "trtype": "rdma", 00:26:34.417 "traddr": "192.168.100.8", 00:26:34.417 "adrfam": "ipv4", 00:26:34.417 "trsvcid": "4420", 00:26:34.417 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:34.417 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:34.417 "hdgst": false, 00:26:34.417 "ddgst": false 00:26:34.417 }, 00:26:34.417 "method": "bdev_nvme_attach_controller" 00:26:34.417 },{ 00:26:34.417 "params": { 00:26:34.417 "name": "Nvme3", 00:26:34.417 "trtype": "rdma", 00:26:34.417 "traddr": "192.168.100.8", 00:26:34.417 "adrfam": "ipv4", 00:26:34.417 "trsvcid": "4420", 00:26:34.417 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:34.417 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:34.417 "hdgst": false, 00:26:34.417 "ddgst": false 00:26:34.417 }, 00:26:34.417 "method": "bdev_nvme_attach_controller" 00:26:34.417 },{ 00:26:34.417 "params": { 00:26:34.418 "name": "Nvme4", 00:26:34.418 "trtype": "rdma", 00:26:34.418 "traddr": "192.168.100.8", 00:26:34.418 "adrfam": "ipv4", 00:26:34.418 "trsvcid": "4420", 00:26:34.418 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:34.418 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:34.418 "hdgst": false, 00:26:34.418 "ddgst": false 00:26:34.418 }, 00:26:34.418 "method": "bdev_nvme_attach_controller" 00:26:34.418 },{ 00:26:34.418 "params": { 00:26:34.418 "name": "Nvme5", 00:26:34.418 "trtype": "rdma", 00:26:34.418 "traddr": "192.168.100.8", 00:26:34.418 "adrfam": "ipv4", 00:26:34.418 "trsvcid": "4420", 00:26:34.418 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:34.418 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:34.418 "hdgst": false, 00:26:34.418 "ddgst": false 00:26:34.418 }, 00:26:34.418 "method": "bdev_nvme_attach_controller" 00:26:34.418 },{ 00:26:34.418 "params": { 00:26:34.418 "name": "Nvme6", 00:26:34.418 "trtype": "rdma", 00:26:34.418 "traddr": "192.168.100.8", 00:26:34.418 "adrfam": "ipv4", 00:26:34.418 "trsvcid": "4420", 00:26:34.418 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:34.418 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:34.418 "hdgst": false, 00:26:34.418 "ddgst": false 00:26:34.418 }, 00:26:34.418 "method": 
"bdev_nvme_attach_controller" 00:26:34.418 },{ 00:26:34.418 "params": { 00:26:34.418 "name": "Nvme7", 00:26:34.418 "trtype": "rdma", 00:26:34.418 "traddr": "192.168.100.8", 00:26:34.418 "adrfam": "ipv4", 00:26:34.418 "trsvcid": "4420", 00:26:34.418 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:34.418 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:34.418 "hdgst": false, 00:26:34.418 "ddgst": false 00:26:34.418 }, 00:26:34.418 "method": "bdev_nvme_attach_controller" 00:26:34.418 },{ 00:26:34.418 "params": { 00:26:34.418 "name": "Nvme8", 00:26:34.418 "trtype": "rdma", 00:26:34.418 "traddr": "192.168.100.8", 00:26:34.418 "adrfam": "ipv4", 00:26:34.418 "trsvcid": "4420", 00:26:34.418 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:34.418 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:34.418 "hdgst": false, 00:26:34.418 "ddgst": false 00:26:34.418 }, 00:26:34.418 "method": "bdev_nvme_attach_controller" 00:26:34.418 },{ 00:26:34.418 "params": { 00:26:34.418 "name": "Nvme9", 00:26:34.418 "trtype": "rdma", 00:26:34.418 "traddr": "192.168.100.8", 00:26:34.418 "adrfam": "ipv4", 00:26:34.418 "trsvcid": "4420", 00:26:34.418 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:34.418 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:34.418 "hdgst": false, 00:26:34.418 "ddgst": false 00:26:34.418 }, 00:26:34.418 "method": "bdev_nvme_attach_controller" 00:26:34.418 },{ 00:26:34.418 "params": { 00:26:34.418 "name": "Nvme10", 00:26:34.418 "trtype": "rdma", 00:26:34.418 "traddr": "192.168.100.8", 00:26:34.418 "adrfam": "ipv4", 00:26:34.418 "trsvcid": "4420", 00:26:34.418 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:34.418 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:34.418 "hdgst": false, 00:26:34.418 "ddgst": false 00:26:34.418 }, 00:26:34.418 "method": "bdev_nvme_attach_controller" 00:26:34.418 }' 00:26:34.418 [2024-12-13 19:21:08.603779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.418 [2024-12-13 19:21:08.626140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.356 Running I/O for 1 seconds... 
00:26:36.551 3230.00 IOPS, 201.88 MiB/s
00:26:36.551                                                                                    Latency(us)
00:26:36.551 [2024-12-13T18:21:10.929Z] Device Information : runtime(s)     IOPS     MiB/s   Fail/s   TO/s      Average        min        max
00:26:36.551 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme1n1             :       1.18     341.06     21.32     0.00     0.00    179776.27   31247.56   208876.34
00:26:36.551 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme2n1             :       1.19     376.13     23.51     0.00     0.00    164014.72   10643.05   185388.24
00:26:36.551 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme3n1             :       1.19     398.44     24.90     0.00     0.00    152312.51    4482.66   140928.61
00:26:36.551 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme4n1             :       1.19     398.08     24.88     0.00     0.00    150443.89   11010.05   133378.87
00:26:36.551 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme5n1             :       1.19     386.90     24.18     0.00     0.00    152460.00   10957.62   125829.12
00:26:36.551 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme6n1             :       1.18     378.32     23.65     0.00     0.00    154913.29   14365.49   114085.07
00:26:36.551 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme7n1             :       1.20     392.44     24.53     0.00     0.00    146527.43    9175.04   104438.17
00:26:36.551 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme8n1             :       1.20     378.83     23.68     0.00     0.00    149378.16    8912.90    95210.70
00:26:36.551 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme9n1             :       1.19     377.15     23.57     0.00     0.00    148929.16    9909.04   109051.90
00:26:36.551 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:36.551 Verification LBA range: start 0x0 length 0x400
00:26:36.551 Nvme10n1            :       1.19     322.81     20.18     0.00     0.00    171341.28   10852.76   203004.31
00:26:36.551 [2024-12-13T18:21:10.929Z] ===================================================================================================================
00:26:36.551 [2024-12-13T18:21:10.929Z] Total               :              3750.17    234.39     0.00     0.00    156429.35    4482.66   208876.34
00:26:36.811 19:21:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:36.811 19:21:11
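
The MiB/s column in the table above is just IOPS scaled by the 64 KiB I/O size (-o 65536), i.e. IOPS/16, and the figures check out:

awk 'BEGIN { print 3230.00 * 65536 / 1048576 }'   # 201.875 -> the 201.88 MiB/s headline
awk 'BEGIN { print 3750.17 / 16 }'                # 234.386 -> 234.39 in the Total row
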
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:36.811 rmmod nvme_rdma 00:26:36.811 rmmod nvme_fabrics 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 399822 ']' 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 399822 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 399822 ']' 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 399822 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399822 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399822' 00:26:36.811 killing process with pid 399822 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 399822 00:26:36.811 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 399822 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:37.381 00:26:37.381 real 0m13.573s 00:26:37.381 user 0m28.708s 00:26:37.381 sys 0m6.697s 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
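
nvmftestfini tears the target down through killprocess (autotest_common.sh@954-978 above), which checks the process name first and refuses to signal a sudo wrapper directly. A condensed version of the pattern:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name; name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1                 # here it is reactor_1, so proceed
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null
    return 0
}
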
-- common/autotest_common.sh@10 -- # set +x 00:26:37.381 ************************************ 00:26:37.381 END TEST nvmf_shutdown_tc1 00:26:37.381 ************************************ 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:37.381 ************************************ 00:26:37.381 START TEST nvmf_shutdown_tc2 00:26:37.381 ************************************ 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:37.381 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:37.382 19:21:11 
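
The e810/x722/mlx arrays being filled above are keyed by PCI vendor:device IDs (0x8086 is Intel, 0x15b3 is Mellanox); the 0x1015 entries matched below correspond to ConnectX-4 Lx ports. The same probe can be done by hand:

lspci -Dnn -d 15b3:1015   # lists e.g. 0000:d9:00.0 and 0000:d9:00.1 on this rig
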
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:37.382 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:37.382 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:37.382 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:37.382 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:37.382 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:37.642 19:21:11 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:37.642 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:37.643 19:21:11 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:37.643 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:37.643 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:37.643 altname enp217s0f0np0 00:26:37.643 altname ens818f0np0 00:26:37.643 inet 192.168.100.8/24 scope global mlx_0_0 00:26:37.643 valid_lft forever preferred_lft forever 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:37.643 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:37.643 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:37.643 altname enp217s0f1np1 00:26:37.643 altname ens818f1np1 00:26:37.643 inet 192.168.100.9/24 scope global mlx_0_1 00:26:37.643 valid_lft forever preferred_lft forever 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:37.643 
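The address lookup traced above (nvmf/common.sh@116-117) reduces to a three-stage pipeline. Isolated below as a standalone sketch: the helper name and the pipeline come straight from the trace, but the function body is a reconstruction, not the verbatim source.

    # Sketch of get_ip_address as traced above (reconstruction, not the verbatim function).
    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 holds "ADDR/PREFIX",
        # so awk picks the field and cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9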
19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:37.643 192.168.100.9' 00:26:37.643 19:21:11 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:37.643 192.168.100.9' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:37.643 192.168.100.9' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=401206 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 401206 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 401206 ']' 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
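nvmfappstart, traced above, backgrounds nvmf_tgt and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that sequence follows; the binary path, flags, socket path, and max_retries=100 are copied from the trace, while the polling loop body is an assumed stand-in for waitforlisten's real implementation.

    # Sketch only; waitforlisten internals approximated.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!                           # 401206 in this run
    for ((i = 100; i > 0; i--)); do      # max_retries=100, per the trace
        # Any cheap RPC succeeding proves the target is up and listening on the socket.
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

The -m 0x1E core mask selects cores 1 through 4, which matches the four reactor start-up notices just below.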
00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.643 19:21:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.643 [2024-12-13 19:21:12.008824] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:37.643 [2024-12-13 19:21:12.008881] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.903 [2024-12-13 19:21:12.109183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:37.903 [2024-12-13 19:21:12.132230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.903 [2024-12-13 19:21:12.132265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.903 [2024-12-13 19:21:12.132275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.903 [2024-12-13 19:21:12.132283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.903 [2024-12-13 19:21:12.132290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.903 [2024-12-13 19:21:12.134097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.903 [2024-12-13 19:21:12.134207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:37.903 [2024-12-13 19:21:12.134222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:26:37.903 [2024-12-13 19:21:12.134227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.903 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.903 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:37.903 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:37.903 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:37.903 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.162 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.162 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:38.162 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.162 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.162 [2024-12-13 19:21:12.308817] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x108f840/0x1093cf0) succeed. 00:26:38.162 [2024-12-13 19:21:12.317998] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1090e80/0x10d5390) succeed. 
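With the target up, the test creates the RDMA transport; the two create_ib_device notices above confirm both mlx5 ports were claimed. rpc_cmd in the trace is the test harness wrapper around rpc.py, so the direct equivalent (an assumption, same flags as traced) would be:

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192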
00:26:38.162 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.162 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.163 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.163 Malloc1 00:26:38.422 [2024-12-13 19:21:12.558836] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:38.422 Malloc2 00:26:38.422 Malloc3 00:26:38.422 Malloc4 00:26:38.422 Malloc5 00:26:38.422 Malloc6 00:26:38.682 Malloc7 00:26:38.682 Malloc8 00:26:38.682 Malloc9 00:26:38.682 Malloc10 00:26:38.682 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.682 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:38.682 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:38.682 19:21:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=401305 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 401305 /var/tmp/bdevperf.sock 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 401305 ']' 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:38.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
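Each pass of the for i in "${num_subsystems[@]}" / cat loop above appends one subsystem's worth of RPCs to rpcs.txt, and the bare rpc_cmd at shutdown.sh@36 replays the whole file in one shot; that is what produced the Malloc1 through Malloc10 bdevs and the 192.168.100.8:4420 listener. The heredoc body itself never reaches the trace, so the block below is reconstructed from those visible side effects; the bdev size, block size, and serial-number scheme are assumptions.

    # Reconstructed rpcs.txt fragment for i=1 (the actual text lives in target/shutdown.sh):
    bdev_malloc_create -b Malloc1 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Batching through a file keeps it to one rpc.py process for all ten subsystems rather than one process per call.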
00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.682 { 00:26:38.682 "params": { 00:26:38.682 "name": "Nvme$subsystem", 00:26:38.682 "trtype": "$TEST_TRANSPORT", 00:26:38.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.682 "adrfam": "ipv4", 00:26:38.682 "trsvcid": "$NVMF_PORT", 00:26:38.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.682 "hdgst": ${hdgst:-false}, 00:26:38.682 "ddgst": ${ddgst:-false} 00:26:38.682 }, 00:26:38.682 "method": "bdev_nvme_attach_controller" 00:26:38.682 } 00:26:38.682 EOF 00:26:38.682 )") 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.682 { 00:26:38.682 "params": { 00:26:38.682 "name": "Nvme$subsystem", 00:26:38.682 "trtype": "$TEST_TRANSPORT", 00:26:38.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.682 "adrfam": "ipv4", 00:26:38.682 "trsvcid": "$NVMF_PORT", 00:26:38.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.682 "hdgst": ${hdgst:-false}, 00:26:38.682 "ddgst": ${ddgst:-false} 00:26:38.682 }, 00:26:38.682 "method": "bdev_nvme_attach_controller" 00:26:38.682 } 00:26:38.682 EOF 00:26:38.682 )") 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.682 { 00:26:38.682 "params": { 00:26:38.682 "name": "Nvme$subsystem", 00:26:38.682 "trtype": "$TEST_TRANSPORT", 00:26:38.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.682 "adrfam": "ipv4", 00:26:38.682 "trsvcid": "$NVMF_PORT", 00:26:38.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.682 "hdgst": ${hdgst:-false}, 00:26:38.682 "ddgst": ${ddgst:-false} 00:26:38.682 }, 00:26:38.682 "method": "bdev_nvme_attach_controller" 00:26:38.682 } 00:26:38.682 EOF 00:26:38.682 )") 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.682 19:21:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.682 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.682 { 00:26:38.682 "params": { 00:26:38.682 "name": "Nvme$subsystem", 00:26:38.682 "trtype": "$TEST_TRANSPORT", 00:26:38.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.682 "adrfam": "ipv4", 00:26:38.682 "trsvcid": "$NVMF_PORT", 00:26:38.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.683 "hdgst": ${hdgst:-false}, 00:26:38.683 "ddgst": ${ddgst:-false} 00:26:38.683 }, 00:26:38.683 "method": "bdev_nvme_attach_controller" 00:26:38.683 } 00:26:38.683 EOF 00:26:38.683 )") 00:26:38.683 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.683 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.683 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.683 { 00:26:38.683 "params": { 00:26:38.683 "name": "Nvme$subsystem", 00:26:38.683 "trtype": "$TEST_TRANSPORT", 00:26:38.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.683 "adrfam": "ipv4", 00:26:38.683 "trsvcid": "$NVMF_PORT", 00:26:38.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.683 "hdgst": ${hdgst:-false}, 00:26:38.683 "ddgst": ${ddgst:-false} 00:26:38.683 }, 00:26:38.683 "method": "bdev_nvme_attach_controller" 00:26:38.683 } 00:26:38.683 EOF 00:26:38.683 )") 00:26:38.683 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.683 [2024-12-13 19:21:13.049610] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:26:38.683 [2024-12-13 19:21:13.049663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401305 ] 00:26:38.683 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.683 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.683 { 00:26:38.683 "params": { 00:26:38.683 "name": "Nvme$subsystem", 00:26:38.683 "trtype": "$TEST_TRANSPORT", 00:26:38.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.683 "adrfam": "ipv4", 00:26:38.683 "trsvcid": "$NVMF_PORT", 00:26:38.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.683 "hdgst": ${hdgst:-false}, 00:26:38.683 "ddgst": ${ddgst:-false} 00:26:38.683 }, 00:26:38.683 "method": "bdev_nvme_attach_controller" 00:26:38.683 } 00:26:38.683 EOF 00:26:38.683 )") 00:26:38.683 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.683 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.943 { 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme$subsystem", 00:26:38.943 "trtype": "$TEST_TRANSPORT", 00:26:38.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "$NVMF_PORT", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.943 "hdgst": ${hdgst:-false}, 00:26:38.943 "ddgst": ${ddgst:-false} 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 } 00:26:38.943 EOF 00:26:38.943 )") 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.943 { 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme$subsystem", 00:26:38.943 "trtype": "$TEST_TRANSPORT", 00:26:38.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "$NVMF_PORT", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.943 "hdgst": ${hdgst:-false}, 00:26:38.943 "ddgst": ${ddgst:-false} 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 } 00:26:38.943 EOF 00:26:38.943 )") 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.943 { 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme$subsystem", 00:26:38.943 "trtype": "$TEST_TRANSPORT", 00:26:38.943 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "$NVMF_PORT", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.943 "hdgst": ${hdgst:-false}, 00:26:38.943 "ddgst": ${ddgst:-false} 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 } 00:26:38.943 EOF 00:26:38.943 )") 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:38.943 { 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme$subsystem", 00:26:38.943 "trtype": "$TEST_TRANSPORT", 00:26:38.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "$NVMF_PORT", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.943 "hdgst": ${hdgst:-false}, 00:26:38.943 "ddgst": ${ddgst:-false} 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 } 00:26:38.943 EOF 00:26:38.943 )") 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:26:38.943 19:21:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme1", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme2", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme3", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme4", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 
00:26:38.943 "params": { 00:26:38.943 "name": "Nvme5", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme6", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme7", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme8", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme9", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:38.943 "hdgst": false, 00:26:38.943 "ddgst": false 00:26:38.943 }, 00:26:38.943 "method": "bdev_nvme_attach_controller" 00:26:38.943 },{ 00:26:38.943 "params": { 00:26:38.943 "name": "Nvme10", 00:26:38.943 "trtype": "rdma", 00:26:38.943 "traddr": "192.168.100.8", 00:26:38.943 "adrfam": "ipv4", 00:26:38.943 "trsvcid": "4420", 00:26:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:38.943 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:38.943 "hdgst": false, 00:26:38.944 "ddgst": false 00:26:38.944 }, 00:26:38.944 "method": "bdev_nvme_attach_controller" 00:26:38.944 }' 00:26:38.944 [2024-12-13 19:21:13.144602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.944 [2024-12-13 19:21:13.167824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.881 Running I/O for 10 seconds... 
00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.881 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.141 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.141 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=19 00:26:40.141 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 19 -ge 100 ']' 00:26:40.141 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.400 
19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=177 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 177 -ge 100 ']' 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 401305 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 401305 ']' 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 401305 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.400 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 401305 00:26:40.659 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:40.659 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:40.659 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 401305' 00:26:40.659 killing process with pid 401305 00:26:40.659 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 401305 00:26:40.659 19:21:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 401305
00:26:40.659 Received shutdown signal, test time was about 0.830540 seconds
00:26:40.659
00:26:40.659 Latency(us)
00:26:40.659 [2024-12-13T18:21:15.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:40.659 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme1n1 : 0.81 371.37 23.21 0.00 0.00 168654.73 3670.02 198810.01
00:26:40.659 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme2n1 : 0.81 392.93 24.56 0.00 0.00 156034.62 6501.17 159383.55
00:26:40.659 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme3n1 : 0.82 392.36 24.52 0.00 0.00 153219.73 7497.32 152672.67
00:26:40.659 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme4n1 : 0.82 391.50 24.47 0.00 0.00 151366.86 8336.18 138412.03
00:26:40.659 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme5n1 : 0.82 390.65 24.42 0.00 0.00 148658.09 9384.76 124151.40
00:26:40.659 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme6n1 : 0.82 389.83 24.36 0.00 0.00 145965.55 10433.33 109051.90
00:26:40.659 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme7n1 : 0.82 389.02 24.31 0.00 0.00 143278.24 11481.91 94791.27
00:26:40.659 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme8n1 : 0.82 388.20 24.26 0.00 0.00 140575.21 12478.05 104438.17
00:26:40.659 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme9n1 : 0.83 387.40 24.21 0.00 0.00 137882.83 13421.77 119118.23
00:26:40.659 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:40.659 Verification LBA range: start 0x0 length 0x400
00:26:40.659 Nvme10n1 : 0.83 308.47 19.28 0.00 0.00 168923.39 2936.01 198810.01
00:26:40.659 [2024-12-13T18:21:15.037Z] ===================================================================================================================
00:26:40.659 [2024-12-13T18:21:15.037Z] Total : 3801.74 237.61 0.00 0.00 150998.10 2936.01 198810.01
00:26:40.918 19:21:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 401206 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
00:26:41.854 rmmod nvme_rdma 00:26:41.854 rmmod nvme_fabrics 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:41.854 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 401206 ']' 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 401206 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 401206 ']' 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 401206 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 401206 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 401206' 00:26:42.113 killing process with pid 401206 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 401206 00:26:42.113 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 401206 00:26:42.373 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.373 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:42.373 00:26:42.373 real 0m5.045s 00:26:42.373 user 0m20.046s 00:26:42.373 sys 0m1.206s 00:26:42.373 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.373 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.373 ************************************ 00:26:42.373 END TEST nvmf_shutdown_tc2 00:26:42.373 ************************************ 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:42.633 ************************************ 00:26:42.633 START TEST nvmf_shutdown_tc3 00:26:42.633 ************************************ 00:26:42.633 19:21:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:42.633 19:21:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:42.633 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:42.633 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:42.633 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:42.634 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # 
[[ rdma == tcp ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:42.634 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:42.634 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:42.634 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:42.634 altname enp217s0f0np0 00:26:42.634 altname ens818f0np0 00:26:42.634 inet 192.168.100.8/24 scope global mlx_0_0 00:26:42.634 valid_lft forever preferred_lft forever 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:42.634 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:42.634 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:42.634 altname enp217s0f1np1 00:26:42.634 altname ens818f1np1 00:26:42.634 inet 192.168.100.9/24 scope global mlx_0_1 00:26:42.634 valid_lft forever preferred_lft forever 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:42.634 19:21:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:42.634 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:42.894 192.168.100.9' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:42.894 192.168.100.9' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:42.894 192.168.100.9' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:42.894 
19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=402206 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 402206 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 402206 ']' 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.894 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:42.894 [2024-12-13 19:21:17.136873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:42.894 [2024-12-13 19:21:17.136922] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.894 [2024-12-13 19:21:17.229636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.894 [2024-12-13 19:21:17.251829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.894 [2024-12-13 19:21:17.251866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
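The nvmftestinit trace above walks the detected mlx5 ports, maps each PCI device to its net device, and derives the target IPs (192.168.100.8 and 192.168.100.9) from the interfaces. A minimal sketch of the address-extraction helper exactly as it appears in the trace (nvmf/common.sh@116-117); the interface names are the ones from this run:

  # Extract the first IPv4 address of an RDMA-capable net device.
  # "ip -o -4" prints one record per address; field 4 is "ADDR/PREFIX",
  # so the prefix length is stripped to leave the bare IPv4 address.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run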
00:26:42.894 [2024-12-13 19:21:17.251875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.894 [2024-12-13 19:21:17.251883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.894 [2024-12-13 19:21:17.251890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.894 [2024-12-13 19:21:17.253641] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.894 [2024-12-13 19:21:17.253755] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.894 [2024-12-13 19:21:17.253845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:26:42.894 [2024-12-13 19:21:17.253846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.154 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.154 [2024-12-13 19:21:17.414968] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11d4840/0x11d8cf0) succeed. 00:26:43.154 [2024-12-13 19:21:17.424269] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11d5e80/0x121a390) succeed. 
00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.412 19:21:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.412 Malloc1 00:26:43.412 [2024-12-13 19:21:17.666409] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:43.412 Malloc2 00:26:43.412 Malloc3 00:26:43.412 Malloc4 00:26:43.671 Malloc5 00:26:43.671 Malloc6 00:26:43.671 Malloc7 00:26:43.671 Malloc8 00:26:43.671 Malloc9 00:26:43.671 Malloc10 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=402410 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 402410 /var/tmp/bdevperf.sock 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 402410 ']' 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:43.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
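The repeated "target/shutdown.sh@29 -- # cat" entries above append one RPC block per subsystem (1 through 10) into rpcs.txt, matching the Malloc1-Malloc10 bdevs just created. The file body is never echoed in this trace, so the following is only a sketch of what such a batch typically contains, using standard SPDK RPC names; the bdev size, block size, and serial numbers are assumptions:

  # Hypothetical rpcs.txt generator: one malloc bdev, subsystem, namespace,
  # and RDMA listener per index. Sizes (128 MiB, 512 B blocks) are assumed.
  for i in $(seq 1 10); do
      cat <<EOF
  bdev_malloc_create -b Malloc$i 128 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  EOF
  done >> rpcs.txt

The batch runs against the target started above, after the RDMA transport was created with nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, which is why the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice appears alongside the Malloc bdevs.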
00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.931 { 00:26:43.931 "params": { 00:26:43.931 "name": "Nvme$subsystem", 00:26:43.931 "trtype": "$TEST_TRANSPORT", 00:26:43.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.931 "adrfam": "ipv4", 00:26:43.931 "trsvcid": "$NVMF_PORT", 00:26:43.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.931 "hdgst": ${hdgst:-false}, 00:26:43.931 "ddgst": ${ddgst:-false} 00:26:43.931 }, 00:26:43.931 "method": "bdev_nvme_attach_controller" 00:26:43.931 } 00:26:43.931 EOF 00:26:43.931 )") 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.931 { 00:26:43.931 "params": { 00:26:43.931 "name": "Nvme$subsystem", 00:26:43.931 "trtype": "$TEST_TRANSPORT", 00:26:43.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.931 "adrfam": "ipv4", 00:26:43.931 "trsvcid": "$NVMF_PORT", 00:26:43.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.931 "hdgst": ${hdgst:-false}, 00:26:43.931 "ddgst": ${ddgst:-false} 00:26:43.931 }, 00:26:43.931 "method": "bdev_nvme_attach_controller" 00:26:43.931 } 00:26:43.931 EOF 00:26:43.931 )") 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.931 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.931 { 00:26:43.931 "params": { 00:26:43.931 "name": "Nvme$subsystem", 00:26:43.931 "trtype": "$TEST_TRANSPORT", 00:26:43.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.931 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "$NVMF_PORT", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.932 "hdgst": ${hdgst:-false}, 00:26:43.932 "ddgst": ${ddgst:-false} 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 } 00:26:43.932 EOF 00:26:43.932 )") 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.932 { 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme$subsystem", 00:26:43.932 "trtype": "$TEST_TRANSPORT", 00:26:43.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "$NVMF_PORT", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.932 "hdgst": ${hdgst:-false}, 00:26:43.932 "ddgst": ${ddgst:-false} 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 } 00:26:43.932 EOF 00:26:43.932 )") 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.932 { 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme$subsystem", 00:26:43.932 "trtype": "$TEST_TRANSPORT", 00:26:43.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "$NVMF_PORT", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.932 "hdgst": ${hdgst:-false}, 00:26:43.932 "ddgst": ${ddgst:-false} 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 } 00:26:43.932 EOF 00:26:43.932 )") 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.932 { 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme$subsystem", 00:26:43.932 "trtype": "$TEST_TRANSPORT", 00:26:43.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "$NVMF_PORT", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.932 "hdgst": ${hdgst:-false}, 00:26:43.932 "ddgst": ${ddgst:-false} 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 } 00:26:43.932 EOF 00:26:43.932 )") 00:26:43.932 [2024-12-13 19:21:18.165868] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:26:43.932 [2024-12-13 19:21:18.165922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid402410 ] 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.932 { 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme$subsystem", 00:26:43.932 "trtype": "$TEST_TRANSPORT", 00:26:43.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "$NVMF_PORT", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.932 "hdgst": ${hdgst:-false}, 00:26:43.932 "ddgst": ${ddgst:-false} 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 } 00:26:43.932 EOF 00:26:43.932 )") 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.932 { 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme$subsystem", 00:26:43.932 "trtype": "$TEST_TRANSPORT", 00:26:43.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "$NVMF_PORT", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.932 "hdgst": ${hdgst:-false}, 00:26:43.932 "ddgst": ${ddgst:-false} 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 } 00:26:43.932 EOF 00:26:43.932 )") 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.932 { 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme$subsystem", 00:26:43.932 "trtype": "$TEST_TRANSPORT", 00:26:43.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "$NVMF_PORT", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.932 "hdgst": ${hdgst:-false}, 00:26:43.932 "ddgst": ${ddgst:-false} 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 } 00:26:43.932 EOF 00:26:43.932 )") 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:43.932 { 00:26:43.932 "params": { 00:26:43.932 "name": 
"Nvme$subsystem", 00:26:43.932 "trtype": "$TEST_TRANSPORT", 00:26:43.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "$NVMF_PORT", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.932 "hdgst": ${hdgst:-false}, 00:26:43.932 "ddgst": ${ddgst:-false} 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 } 00:26:43.932 EOF 00:26:43.932 )") 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:43.932 19:21:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme1", 00:26:43.932 "trtype": "rdma", 00:26:43.932 "traddr": "192.168.100.8", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "4420", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:43.932 "hdgst": false, 00:26:43.932 "ddgst": false 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 },{ 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme2", 00:26:43.932 "trtype": "rdma", 00:26:43.932 "traddr": "192.168.100.8", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "4420", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:43.932 "hdgst": false, 00:26:43.932 "ddgst": false 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 },{ 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme3", 00:26:43.932 "trtype": "rdma", 00:26:43.932 "traddr": "192.168.100.8", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "4420", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:43.932 "hdgst": false, 00:26:43.932 "ddgst": false 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 },{ 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme4", 00:26:43.932 "trtype": "rdma", 00:26:43.932 "traddr": "192.168.100.8", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "4420", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:43.932 "hdgst": false, 00:26:43.932 "ddgst": false 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 },{ 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme5", 00:26:43.932 "trtype": "rdma", 00:26:43.932 "traddr": "192.168.100.8", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "4420", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:43.932 "hdgst": false, 00:26:43.932 "ddgst": false 00:26:43.932 }, 00:26:43.932 "method": "bdev_nvme_attach_controller" 00:26:43.932 },{ 00:26:43.932 "params": { 00:26:43.932 "name": "Nvme6", 00:26:43.932 "trtype": "rdma", 00:26:43.932 "traddr": "192.168.100.8", 00:26:43.932 "adrfam": "ipv4", 00:26:43.932 "trsvcid": "4420", 00:26:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:43.932 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:43.932 "hdgst": false, 00:26:43.933 "ddgst": false 00:26:43.933 }, 00:26:43.933 "method": 
"bdev_nvme_attach_controller" 00:26:43.933 },{ 00:26:43.933 "params": { 00:26:43.933 "name": "Nvme7", 00:26:43.933 "trtype": "rdma", 00:26:43.933 "traddr": "192.168.100.8", 00:26:43.933 "adrfam": "ipv4", 00:26:43.933 "trsvcid": "4420", 00:26:43.933 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:43.933 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:43.933 "hdgst": false, 00:26:43.933 "ddgst": false 00:26:43.933 }, 00:26:43.933 "method": "bdev_nvme_attach_controller" 00:26:43.933 },{ 00:26:43.933 "params": { 00:26:43.933 "name": "Nvme8", 00:26:43.933 "trtype": "rdma", 00:26:43.933 "traddr": "192.168.100.8", 00:26:43.933 "adrfam": "ipv4", 00:26:43.933 "trsvcid": "4420", 00:26:43.933 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:43.933 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:43.933 "hdgst": false, 00:26:43.933 "ddgst": false 00:26:43.933 }, 00:26:43.933 "method": "bdev_nvme_attach_controller" 00:26:43.933 },{ 00:26:43.933 "params": { 00:26:43.933 "name": "Nvme9", 00:26:43.933 "trtype": "rdma", 00:26:43.933 "traddr": "192.168.100.8", 00:26:43.933 "adrfam": "ipv4", 00:26:43.933 "trsvcid": "4420", 00:26:43.933 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:43.933 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:43.933 "hdgst": false, 00:26:43.933 "ddgst": false 00:26:43.933 }, 00:26:43.933 "method": "bdev_nvme_attach_controller" 00:26:43.933 },{ 00:26:43.933 "params": { 00:26:43.933 "name": "Nvme10", 00:26:43.933 "trtype": "rdma", 00:26:43.933 "traddr": "192.168.100.8", 00:26:43.933 "adrfam": "ipv4", 00:26:43.933 "trsvcid": "4420", 00:26:43.933 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:43.933 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:43.933 "hdgst": false, 00:26:43.933 "ddgst": false 00:26:43.933 }, 00:26:43.933 "method": "bdev_nvme_attach_controller" 00:26:43.933 }' 00:26:43.933 [2024-12-13 19:21:18.262619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.933 [2024-12-13 19:21:18.285100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.870 Running I/O for 10 seconds... 
00:26:44.870 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.870 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:44.871 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:44.871 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.871 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=47 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 47 -ge 100 ']' 00:26:45.130 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:45.389 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:45.389 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:45.389 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:45.389 19:21:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:45.389 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.389 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=199 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 199 -ge 100 ']' 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 402206 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 402206 ']' 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 402206 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402206 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402206' 00:26:45.649 killing process with pid 402206 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 402206 00:26:45.649 19:21:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 402206 00:26:46.168 2639.00 IOPS, 164.94 MiB/s [2024-12-13T18:21:20.546Z] 19:21:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:46.740 [2024-12-13 19:21:20.987149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.987187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:cff200 sqhd:3bf4 p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.987200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.987209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:18613 cdw0:cff200 sqhd:3bf4 p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.987219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.987227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:cff200 sqhd:3bf4 p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.987241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.987250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:cff200 sqhd:3bf4 p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.989569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.740 [2024-12-13 19:21:20.989618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:46.740 [2024-12-13 19:21:20.989675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.989709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:6fce p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.989742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.989772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:6fce p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.989803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.989834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:6fce p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.989866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.989896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:6fce p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.991867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.740 [2024-12-13 19:21:20.991894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
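The waitforio trace earlier in this block (target/shutdown.sh@51-@70) is the heart of tc3: it polls bdevperf's per-bdev read counter until I/O is demonstrably flowing, then lets the test kill the target underneath the live workload. A minimal reconstruction of that helper from the xtrace above; the failure branches of the two -z guards are assumed, since only the happy path is traced:

    waitforio() {
        local sock=$1 bdev=$2
        [ -z "$sock" ] && return 1   # assumed: guard is traced, its branch is not
        [ -z "$bdev" ] && return 1   # assumed: guard is traced, its branch is not
        local ret=1
        local i
        for ((i = 10; i != 0; i--)); do
            # Query bdevperf's RPC socket for cumulative iostat and extract
            # the read-op count (47 on the first pass above, 199 on the second).
            local read_io_count
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0   # at least 100 reads completed: I/O is live
                break
            fi
            sleep 0.25
        done
        return $ret
    }

In the run above the threshold is crossed on the second iteration (47 -> 199 reads), so the helper returns 0 and the script proceeds to killprocess 402206 while bdevperf is still issuing I/O.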
00:26:46.740 [2024-12-13 19:21:20.991916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.991928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:d30c p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.991939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.991948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:d30c p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.991958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.740 [2024-12-13 19:21:20.991966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:d30c p:1 m:0 dnr:0 00:26:46.740 [2024-12-13 19:21:20.991976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.991986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:d30c p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.994029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.741 [2024-12-13 19:21:20.994048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:46.741 [2024-12-13 19:21:20.994066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.994080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ad4e p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.994090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.994099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ad4e p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.994108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.994117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ad4e p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.994126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.994136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ad4e p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.996151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.741 [2024-12-13 19:21:20.996165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
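The killprocess call traced just before this qpair dump (autotest_common.sh@954-@978) is what triggers everything that follows: it signals the nvmf target (pid 402206, comm reactor_1) and waits for it, after which the host-side controllers observe their dead RDMA endpoints. A sketch of the helper as traced, Linux path only; the sudo-wrapper branch is checked but not exercised in this run, so its body is not reconstructed here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # assumed: guard is traced, branch is not
        kill -0 "$pid"                   # fail fast if the pid is already gone
        # Only the '[ Linux = Linux ]' path appears in the trace; other OSes elided.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                      # reap the target and collect its status
    }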
00:26:46.741 [2024-12-13 19:21:20.996182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.996193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:2fc8 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.996202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.996212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:2fc8 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.996221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.996230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:2fc8 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.996239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.996249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:2fc8 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.998422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.741 [2024-12-13 19:21:20.998436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:46.741 [2024-12-13 19:21:20.998451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.998461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:8e24 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.998470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.998479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:8e24 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.998489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.998498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:8e24 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:20.998510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:20.998519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:8e24 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.000595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.741 [2024-12-13 19:21:21.000637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:26:46.741 [2024-12-13 19:21:21.000688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.000720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:8c1a p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.000753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.000783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:8c1a p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.000815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.000844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:8c1a p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.000876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.000906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:8c1a p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.002752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.741 [2024-12-13 19:21:21.002771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:46.741 [2024-12-13 19:21:21.002791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.002804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ab5a p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.002817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.002829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ab5a p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.002841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.002853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ab5a p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.002866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.002877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ab5a p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.004991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.741 [2024-12-13 19:21:21.005007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:26:46.741 [2024-12-13 19:21:21.005027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.005040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:c630 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.005062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.005074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:c630 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.005087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.005104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:c630 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.005119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.005130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:c630 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.007248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.741 [2024-12-13 19:21:21.007264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:46.741 [2024-12-13 19:21:21.007284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.007297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ac82 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.007309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.007321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ac82 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.007334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.007345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ac82 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.007358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.741 [2024-12-13 19:21:21.007370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:18613 cdw0:1 sqhd:ac82 p:1 m:0 dnr:0 00:26:46.741 [2024-12-13 19:21:21.009445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.741 [2024-12-13 19:21:21.009462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
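All ten controller blocks above tell the same story: with the target gone, each host-side admin queue pair hits CQ transport error -6 (ENXIO, "No such device or address"), the controller is marked failed, and its four outstanding ASYNC EVENT REQUESTs are completed with ABORTED - SQ DELETION (generic status 00/08, i.e. the command was aborted because its submission queue was deleted). That is the expected signature of an orderly target shutdown, not a data-path failure. One quick triage check is that every subsystem (cnode1 through cnode10) failed the same way; a possible tally, assuming the log has been saved as shutdown.log:

    grep -o 'cnode[0-9]*, 1] CQ transport error -6' shutdown.log | sort | uniq -c
    # expect exactly one match per cnode; a missing or extra controller here
    # would point at something other than the deliberate shutdown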
00:26:46.741 [2024-12-13 19:21:21.011462] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:26:46.741 [2024-12-13 19:21:21.013463] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:26:46.741 [2024-12-13 19:21:21.015751] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:26:46.741 [2024-12-13 19:21:21.017888] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:26:46.741 [2024-12-13 19:21:21.019901] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:26:46.741 [2024-12-13 19:21:21.021940] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:26:46.741 [2024-12-13 19:21:21.023759] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:26:46.742 [2024-12-13 19:21:21.023869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b6fc00 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.023899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.023936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b5fb80 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.023959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.023990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b4fb00 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.024014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.024054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b3fa80 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.024077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.024108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b2fa00 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.024131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.024161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b1f980 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.024185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.024215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002b0f900 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.024238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.024270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aff880 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.024293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.024324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100270f900 len:0x10000 key:0x184d00 00:26:46.742 [2024-12-13 19:21:21.024347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:a036 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026338] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:26:46.742 [2024-12-13 19:21:21.026435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf780 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf700 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf680 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf600 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f580 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a8f500 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7f480 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6f400 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5f380 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4f300 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.026951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.026981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3f280 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.027004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2f200 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.027071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1f180 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.027127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0f100 len:0x10000 key:0x184a00 00:26:46.742 [2024-12-13 19:21:21.027181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002df0000 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff80 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcff00 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfe80 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafe00 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fd80 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fd00 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fc80 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6fc00 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.742 [2024-12-13 19:21:21.027694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5fb80 len:0x10000 key:0x183c00 00:26:46.742 [2024-12-13 19:21:21.027716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.027751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4fb00 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.027773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.027804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3fa80 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.027826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.027859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2fa00 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.027882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.027913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f980 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.027935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.027966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f900 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.027989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff880 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef800 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf780 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf700 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf680 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf600 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c9f580 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8f500 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7f480 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6f400 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5f380 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4f300 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3f280 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c2f200 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1f180 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0f100 len:0x10000 key:0x183c00 00:26:46.743 [2024-12-13 19:21:21.028855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ff0000 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.028909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fdff80 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.028966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.028997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fcff00 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fbfe80 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fafe00 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f9fd80 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f8fd00 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f7fc80 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f6fc00 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f5fb80 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f4fb00 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.743 [2024-12-13 19:21:21.029491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f3fa80 len:0x10000 key:0x184b00 00:26:46.743 [2024-12-13 19:21:21.029513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.029547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f2fa00 len:0x10000 key:0x184b00 00:26:46.744 [2024-12-13 19:21:21.029569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.029600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f1f980 len:0x10000 key:0x184b00 00:26:46.744 [2024-12-13 19:21:21.029622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.029653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f0f900 len:0x10000 key:0x184b00 00:26:46.744 [2024-12-13 19:21:21.029676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.029706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eff880 len:0x10000 key:0x184b00 00:26:46.744 [2024-12-13 19:21:21.029730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.029760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eef800 len:0x10000 key:0x184b00 00:26:46.744 [2024-12-13 19:21:21.029783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.029813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002edf780 len:0x10000 key:0x184b00 00:26:46.744 [2024-12-13 19:21:21.029836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.029866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aef800 len:0x10000 key:0x184a00 00:26:46.744 [2024-12-13 19:21:21.029889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:5438 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.033521] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
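The long WRITE dumps above are the in-flight bdevperf commands that were aborted when the queues went down: sqid/cid name the submission-queue slot, lba/len give the target range, and the SGL KEYED DATA BLOCK fields (ADDRESS, len, key) are the RDMA memory-region handles registered for the data transfer. Every command is len:128 blocks and the LBAs advance in steps of 128, i.e. a sequential stream; at a 512-byte block size (an assumption, the log does not state it) that is 64 KiB per I/O, which squares with the throughput line printed just before the dump: 2639.00 IOPS x 64 KiB = 164.94 MiB/s. A small extraction to eyeball the aborted ranges, again assuming shutdown.log:

    # print "lba:... len:..." for each aborted WRITE to confirm the stride
    awk '/WRITE sqid:1/ {
        out = ""
        for (i = 1; i <= NF; i++) if ($i ~ /^(lba|len):/) out = out $i " "
        print out
    }' shutdown.log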
00:26:46.744 [2024-12-13 19:21:21.033563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d45f000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.033587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.033637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d43e000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.033663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.033695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d41d000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.033718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.033750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3fc000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.033778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.033810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3db000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.033833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.033865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3ba000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.033889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.033922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d399000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.033945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.033977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d378000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d357000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 
00:26:46.744 [2024-12-13 19:21:21.034166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d336000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d315000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2f4000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2d3000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2b2000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d291000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d270000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d7000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b6000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 
[2024-12-13 19:21:21.034689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010495000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010474000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010453000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010432000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010411000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.034965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000103f0000 len:0x10000 key:0x183800 00:26:46.744 [2024-12-13 19:21:21.034988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.744 [2024-12-13 19:21:21.035020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000083ef000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000083ce000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000083ad000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 
[2024-12-13 19:21:21.035205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000838c000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000836b000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000834a000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008329000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008308000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b1b2000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b191000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b170000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd02000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 
[2024-12-13 19:21:21.035712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dce1000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcc0000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000085ff000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000085de000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000085bd000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.035959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.035990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000859c000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000857b000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000855a000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008539000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 
[2024-12-13 19:21:21.036223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008518000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000084f7000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000084d6000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000084b5000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008494000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008473000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008452000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008431000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008410000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 
[2024-12-13 19:21:21.036736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000880f000 len:0x10000 key:0x183800 00:26:46.745 [2024-12-13 19:21:21.036759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.745 [2024-12-13 19:21:21.036792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000087ee000 len:0x10000 key:0x183800 00:26:46.746 [2024-12-13 19:21:21.036815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.746 [2024-12-13 19:21:21.036847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000087cd000 len:0x10000 key:0x183800 00:26:46.746 [2024-12-13 19:21:21.036869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.746 [2024-12-13 19:21:21.036901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000087ac000 len:0x10000 key:0x183800 00:26:46.746 [2024-12-13 19:21:21.036924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.746 [2024-12-13 19:21:21.036956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000878b000 len:0x10000 key:0x183800 00:26:46.746 [2024-12-13 19:21:21.036978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.746 [2024-12-13 19:21:21.037010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000876a000 len:0x10000 key:0x183800 00:26:46.746 [2024-12-13 19:21:21.037036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.746 [2024-12-13 19:21:21.037077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008749000 len:0x10000 key:0x183800 00:26:46.746 [2024-12-13 19:21:21.037100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.746 [2024-12-13 19:21:21.037132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008728000 len:0x10000 key:0x183800 00:26:46.746 [2024-12-13 19:21:21.037156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.746 [2024-12-13 19:21:21.037188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008707000 len:0x10000 key:0x183800 00:26:46.746 [2024-12-13 19:21:21.037211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0 00:26:46.746 
[2024-12-13 19:21:21.037242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000086e6000 len:0x10000 key:0x183800
00:26:46.746 [2024-12-13 19:21:21.037266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18613 cdw0:c700b000 sqhd:9146 p:1 m:0 dnr:0
00:26:46.746 [2024-12-13 19:21:21.063275] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063430] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063469] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063499] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063530] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063562] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063591] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063622] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063653] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063681] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.063712] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.077977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:46.746 [2024-12-13 19:21:21.078021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:46.746 [2024-12-13 19:21:21.078104] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.078130] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.078158] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.078181] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.078202] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.078223] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.078244] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.078266] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:26:46.746 [2024-12-13 19:21:21.079058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:46.746 [2024-12-13 19:21:21.079089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:46.746 [2024-12-13 19:21:21.079113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:46.746 [2024-12-13 19:21:21.081834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:46.746 task offset: 41472 on job bdev=Nvme1n1 fails
00:26:46.746
00:26:46.746 Latency(us)
00:26:46.746 [2024-12-13T18:21:21.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:46.746 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme1n1 ended in about 1.90 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme1n1 : 1.90 157.88 9.87 33.68 0.00 331535.03 2988.44 1053609.16
00:26:46.746 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme2n1 ended in about 1.90 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme2n1 : 1.90 158.23 9.89 33.64 0.00 327944.73 3040.87 1053609.16
00:26:46.746 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme3n1 ended in about 1.90 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme3n1 : 1.90 151.24 9.45 33.61 0.00 337407.03 24641.54 1053609.16
00:26:46.746 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme4n1 ended in about 1.91 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme4n1 : 1.91 142.69 8.92 33.57 0.00 351169.97 29360.13 1053609.16
00:26:46.746 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme5n1 ended in about 1.91 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme5n1 : 1.91 134.15 8.38 33.54 0.00 365952.70 37539.02 1053609.16
00:26:46.746 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme6n1 ended in about 1.91 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme6n1 : 1.91 134.03 8.38 33.51 0.00 362999.64 52638.52 1053609.16
00:26:46.746 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme7n1 ended in about 1.91 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme7n1 : 1.91 133.93 8.37 33.48 0.00 360014.15 65850.57 1053609.16
00:26:46.746 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme8n1 ended in about 1.91 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme8n1 : 1.91 136.98 8.56 33.46 0.00 349238.21 35861.30 1067030.94
00:26:46.746 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme9n1 ended in about 1.87 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme9n1 : 1.87 137.19 8.57 34.30 0.00 345621.14 39007.03 1067030.94
00:26:46.746 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:46.746 Job: Nvme10n1 ended in about 1.87 seconds with error
00:26:46.746 Verification LBA range: start 0x0 length 0x400
00:26:46.746 Nvme10n1 : 1.87 102.49 6.41 34.16 0.00 429532.77 39007.03 1067030.94
00:26:46.746 [2024-12-13T18:21:21.124Z] ===================================================================================================================
00:26:46.746 [2024-12-13T18:21:21.124Z] Total : 1388.82 86.80 336.96 0.00 353772.20 2988.44 1067030.94
00:26:46.746 [2024-12-13 19:21:21.109595] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:46.746 [2024-12-13 19:21:21.109620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:46.746 [2024-12-13 19:21:21.109635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:46.746 [2024-12-13 19:21:21.109648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:26:46.746 [2024-12-13 19:21:21.109658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:26:47.006 [2024-12-13 19:21:21.116536] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:47.006 [2024-12-13 19:21:21.116559] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:47.006 [2024-12-13 19:21:21.116567] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:26:47.006 [2024-12-13 19:21:21.116662] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:47.006 [2024-12-13 19:21:21.116674] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:47.006 [2024-12-13 19:21:21.116681] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e3200
00:26:47.006 [2024-12-13 19:21:21.122665] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:47.006 [2024-12-13 19:21:21.122684] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:26:47.006 [2024-12-13 19:21:21.122692] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d8d40
00:26:47.006 [2024-12-13 19:21:21.122779] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:26:47.006 [2024-12-13 19:21:21.122790]
nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:47.006 [2024-12-13 19:21:21.122797] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bd840 00:26:47.006 [2024-12-13 19:21:21.122882] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:47.006 [2024-12-13 19:21:21.122893] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:47.006 [2024-12-13 19:21:21.122900] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001709d5c0 00:26:47.006 [2024-12-13 19:21:21.123808] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:47.006 [2024-12-13 19:21:21.123823] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:47.006 [2024-12-13 19:21:21.123830] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170a6e00 00:26:47.006 [2024-12-13 19:21:21.123926] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:47.006 [2024-12-13 19:21:21.123937] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:47.006 [2024-12-13 19:21:21.123944] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cc0c0 00:26:47.007 [2024-12-13 19:21:21.124048] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:47.007 [2024-12-13 19:21:21.124062] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:47.007 [2024-12-13 19:21:21.124072] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017052240 00:26:47.007 [2024-12-13 19:21:21.124147] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:47.007 [2024-12-13 19:21:21.124160] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:47.007 [2024-12-13 19:21:21.124170] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001702f140 00:26:47.007 [2024-12-13 19:21:21.124262] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:47.007 [2024-12-13 19:21:21.124276] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:47.007 [2024-12-13 19:21:21.124285] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017089c00 00:26:47.266 19:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 402410 00:26:47.266 19:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:26:47.266 19:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 402410 00:26:47.266 19:21:21 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:26:47.266 19:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:47.266 19:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:26:47.266 19:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:47.266 19:21:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 402410 00:26:47.835 [2024-12-13 19:21:22.120893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.835 [2024-12-13 19:21:22.120955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:47.835 [2024-12-13 19:21:22.122505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.835 [2024-12-13 19:21:22.122549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:47.835 [2024-12-13 19:21:22.122674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:47.835 [2024-12-13 19:21:22.122684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:47.835 [2024-12-13 19:21:22.122694] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:47.835 [2024-12-13 19:21:22.122705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:47.835 [2024-12-13 19:21:22.122718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:47.835 [2024-12-13 19:21:22.122730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:47.835 [2024-12-13 19:21:22.122738] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:26:47.835 [2024-12-13 19:21:22.122746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:26:47.835 [2024-12-13 19:21:22.126809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.835 [2024-12-13 19:21:22.126859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:47.835 [2024-12-13 19:21:22.128667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.835 [2024-12-13 19:21:22.128708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:26:47.835 [2024-12-13 19:21:22.130249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.835 [2024-12-13 19:21:22.130290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:47.835 [2024-12-13 19:21:22.131713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.835 [2024-12-13 19:21:22.131753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:47.836 [2024-12-13 19:21:22.133188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.836 [2024-12-13 19:21:22.133228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:26:47.836 [2024-12-13 19:21:22.134748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.836 [2024-12-13 19:21:22.134788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:47.836 [2024-12-13 19:21:22.136176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.836 [2024-12-13 19:21:22.136219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:47.836 [2024-12-13 19:21:22.137996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:26:47.836 [2024-12-13 19:21:22.138037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:47.836 [2024-12-13 19:21:22.138074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:47.836 [2024-12-13 19:21:22.138103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:47.836 [2024-12-13 19:21:22.138132] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:26:47.836 [2024-12-13 19:21:22.138164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:26:47.836 [2024-12-13 19:21:22.138202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:47.836 [2024-12-13 19:21:22.138230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:47.836 [2024-12-13 19:21:22.138259] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:26:47.836 [2024-12-13 19:21:22.138288] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:26:47.836 [2024-12-13 19:21:22.138331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:47.836 [2024-12-13 19:21:22.138360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:47.836 [2024-12-13 19:21:22.138388] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:26:47.836 [2024-12-13 19:21:22.138418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:26:47.836 [2024-12-13 19:21:22.138681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:47.836 [2024-12-13 19:21:22.138717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:47.836 [2024-12-13 19:21:22.138745] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:26:47.836 [2024-12-13 19:21:22.138774] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:26:47.836 [2024-12-13 19:21:22.138811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:26:47.836 [2024-12-13 19:21:22.138840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:26:47.836 [2024-12-13 19:21:22.138868] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:26:47.836 [2024-12-13 19:21:22.138897] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:26:47.836 [2024-12-13 19:21:22.138932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:26:47.836 [2024-12-13 19:21:22.138962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:26:47.836 [2024-12-13 19:21:22.138989] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:26:47.836 [2024-12-13 19:21:22.139018] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:26:47.836 [2024-12-13 19:21:22.139071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:47.836 [2024-12-13 19:21:22.139101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:47.836 [2024-12-13 19:21:22.139130] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:26:47.836 [2024-12-13 19:21:22.139159] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:26:47.836 [2024-12-13 19:21:22.139194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:47.836 [2024-12-13 19:21:22.139222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:47.836 [2024-12-13 19:21:22.139250] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:26:47.836 [2024-12-13 19:21:22.139280] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:48.096 rmmod nvme_rdma 00:26:48.096 rmmod nvme_fabrics 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:48.096 19:21:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 402206 ']' 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 402206 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 402206 ']' 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 402206 00:26:48.096 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (402206) - No such process 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 402206 is not found' 00:26:48.096 Process with pid 402206 is not found 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:48.096 00:26:48.096 real 0m5.557s 00:26:48.096 user 0m16.109s 00:26:48.096 sys 0m1.364s 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:48.096 ************************************ 00:26:48.096 END TEST nvmf_shutdown_tc3 00:26:48.096 ************************************ 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:48.096 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:48.097 ************************************ 00:26:48.097 START TEST nvmf_shutdown_tc4 00:26:48.097 ************************************ 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.097 
19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:48.097 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:48.358 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:48.358 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:48.358 19:21:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:48.358 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:48.358 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:48.358 19:21:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:48.358 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:48.359 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:48.359 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:48.359 altname enp217s0f0np0 00:26:48.359 altname ens818f0np0 00:26:48.359 inet 192.168.100.8/24 scope global mlx_0_0 00:26:48.359 valid_lft forever preferred_lft forever 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:48.359 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:48.359 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:48.359 altname enp217s0f1np1 00:26:48.359 altname ens818f1np1 00:26:48.359 
inet 192.168.100.9/24 scope global mlx_0_1 00:26:48.359 valid_lft forever preferred_lft forever 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:48.359 19:21:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:48.359 192.168.100.9' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:48.359 192.168.100.9' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:48.359 192.168.100.9' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:48.359 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.618 19:21:22 
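[editor's note] The two target IPs are derived with the same ip/awk/cut pipeline the xtrace shows, then split with head/tail. A condensed sketch, assuming the interface names this run discovered:

  #!/usr/bin/env bash
  # First IPv4 address of an interface, as nvmf/common.sh@117 does above.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9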
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=403249 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 403249 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 403249 ']' 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.618 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.619 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.619 19:21:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:48.619 [2024-12-13 19:21:22.801866] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:48.619 [2024-12-13 19:21:22.801923] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.619 [2024-12-13 19:21:22.895408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.619 [2024-12-13 19:21:22.918191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.619 [2024-12-13 19:21:22.918228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.619 [2024-12-13 19:21:22.918237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.619 [2024-12-13 19:21:22.918246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.619 [2024-12-13 19:21:22.918253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
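[editor's note] waitforlisten above blocks until the target is both alive and serving RPCs on /var/tmp/spdk.sock. The real helper in autotest_common.sh does more (it retries actual RPC calls); a simplified sketch of the idea, with the helper name and timeout being assumptions rather than the harness's own:

  # Poll until pid $1 is alive and its RPC socket $2 exists.
  wait_for_rpc_socket() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died
          [ -S "$sock" ] && return 0               # socket is up
          sleep 0.1
      done
      return 1
  }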
00:26:48.619 [2024-12-13 19:21:22.920033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.619 [2024-12-13 19:21:22.920144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.619 [2024-12-13 19:21:22.920252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.619 [2024-12-13 19:21:22.920253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:26:48.877 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:48.878 [2024-12-13 19:21:23.093680] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c48840/0x1c4ccf0) succeed. 00:26:48.878 [2024-12-13 19:21:23.102863] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c49e80/0x1c8e390) succeed. 
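[editor's note] The four reactor notices follow directly from the -m 0x1E core mask passed to nvmf_tgt: 0x1E is binary 11110, i.e. cores 1-4. A quick sketch that decodes such a mask:

  #!/usr/bin/env bash
  # Print the cores selected by an SPDK-style hex core mask.
  mask=0x1E
  for core in $(seq 0 31); do
      (( (mask >> core) & 1 )) && printf 'core %d\n' "$core"
  done
  # Prints core 1 through core 4, matching the reactor notices above.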
00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:48.878 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.137 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:49.137 Malloc1 00:26:49.137 [2024-12-13 19:21:23.330153] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:49.137 Malloc2 00:26:49.137 Malloc3 00:26:49.137 Malloc4 00:26:49.137 Malloc5 00:26:49.395 Malloc6 00:26:49.395 Malloc7 00:26:49.395 Malloc8 00:26:49.395 Malloc9 00:26:49.395 Malloc10 00:26:49.395 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.395 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:49.395 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:49.396 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:49.655 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=403488 00:26:49.655 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:26:49.655 19:21:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:26:49.655 [2024-12-13 19:21:23.875702] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
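[editor's note] Ten subsystems (cnode1-cnode10, each backed by a MallocN bdev) are created here from the batched rpcs.txt, after which spdk_nvme_perf drives 128-deep 44 KiB (-o 45056) random writes at the 192.168.100.8:4420 listener for 20 seconds. Roughly what one of those batched subsystems reduces to when issued as individual RPCs; the NQN, serial number, and malloc sizes below are illustrative, not copied from the harness:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create -b Malloc1 64 512                 # 64 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420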
00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 403249 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 403249 ']' 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 403249 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403249 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403249' 00:26:54.930 killing process with pid 403249 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 403249 00:26:54.930 19:21:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 403249 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 starting I/O failed: -6 00:26:54.930 starting I/O failed: -6 00:26:54.930 starting I/O failed: -6 00:26:54.930 starting I/O failed: -6 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 NVMe io qpair process completion error 00:26:54.930 NVMe io qpair process completion error 00:26:55.189 19:21:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, sc=8) 00:26:55.759 starting I/O failed: -6 00:26:55.759 Write completed with error (sct=0, 
sc=8) 00:26:55.759 starting I/O failed: -6
[several hundred further 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' completions omitted — queued spdk_nvme_perf writes are aborted as the target tears down its qpairs; only the unique Keep Alive failures are retained below]
00:26:55.759 [2024-12-13 19:21:29.960960] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:26:55.760 [2024-12-13 19:21:29.973094] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:26:55.760 [2024-12-13 19:21:29.986270] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:26:55.761 [2024-12-13 19:21:29.999346] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:26:55.762 [2024-12-13 19:21:30.011582] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed
00:26:55.762 [2024-12-13 19:21:30.024572] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed
[trailing duplicate write-error completions truncated]
completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 
00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 [2024-12-13 19:21:30.037179] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, 
sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.763 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with 
error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 [2024-12-13 19:21:30.049149] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with error (sct=0, sc=8) 00:26:55.764 Write completed with 
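The flood above is one NVMe status repeated once per outstanding command: sct=0 selects the generic command status type, and within that type sc=8 (0x8) is "Command Aborted due to SQ Deletion", which is what in-flight writes see while the shutdown test tears down submission queues under load (the accompanying -6 is errno ENXIO, "No such device or address"). A throwaway triage snippet, not part of the SPDK tree, that condenses such a log into per-status counts:

```bash
# Hypothetical log-triage helper (not an SPDK script): count completions per
# (sct, sc) pair so a flood like the one above collapses to a few lines.
# Usage: ./count_completions.sh console.log
grep -o 'completed with error (sct=[0-9]*, sc=[0-9]*)' "$1" |
    sort | uniq -c | sort -rn
```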
00:26:55.764 Write completed with error (sct=0, sc=8) [final duplicates elided]
00:26:55.764 NVMe io qpair process completion error
00:26:55.764 NVMe io qpair process completion error
00:26:56.332 19:21:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 403488
00:26:56.332 19:21:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:26:56.332 19:21:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 403488
00:26:56.333 19:21:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:26:56.333 19:21:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:56.333 19:21:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:26:56.333 19:21:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:56.333 19:21:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 403488
00:26:56.902 [2024-12-13 19:21:31.054488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.902 [2024-12-13 19:21:31.054558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:56.902 [2024-12-13 19:21:31.056925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.902 [2024-12-13 19:21:31.056970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:56.902 [2024-12-13 19:21:31.059516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.902 [2024-12-13 19:21:31.059559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
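The `NOT wait 403488` trace above is the harness asserting a failure on purpose: the perf process was killed, so `wait` must return nonzero for the test to pass. A minimal sketch of the idiom, assuming only what the trace shows (the real helper in common/autotest_common.sh also validates that its argument is executable via `valid_exec_arg`):

```bash
# Minimal sketch of the NOT idiom from the trace above; illustrative only.
NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command and capture its exit status
    ((es != 0))      # NOT succeeds only if the wrapped command failed
}

# As in shutdown.sh@158: assert that waiting on the killed perf process fails.
# NOT wait 403488
```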
00:26:56.902 Write completed with error (sct=0, sc=8) [duplicates elided]
00:26:56.903 [2024-12-13 19:21:31.061726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.903 [2024-12-13 19:21:31.061767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:56.903 Write completed with error (sct=0, sc=8) [duplicates elided]
00:26:56.903 [2024-12-13 19:21:31.064139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.903 [2024-12-13 19:21:31.064180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:56.903 Write completed with error (sct=0, sc=8) [duplicates elided]
00:26:56.903 [2024-12-13 19:21:31.066808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.903 [2024-12-13 19:21:31.066854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:56.903 Write completed with error (sct=0, sc=8) [duplicates elided]
00:26:56.903 [2024-12-13 19:21:31.069509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.903 [2024-12-13 19:21:31.069550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:56.903 Write completed with error (sct=0, sc=8) [duplicates elided]
00:26:56.903 [2024-12-13 19:21:31.072002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.903 [2024-12-13 19:21:31.072149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:26:56.903 Write completed with error (sct=0, sc=8) [duplicates elided]
00:26:56.903 [2024-12-13 19:21:31.074627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.903 [2024-12-13 19:21:31.074669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:56.903 Write completed with error (sct=0, sc=8) [duplicates elided through the end of the run]
00:26:56.904 [2024-12-13 19:21:31.114921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:26:56.904 [2024-12-13 19:21:31.114985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:26:56.904 Initializing NVMe Controllers
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:26:56.904 Controller IO queue size 128, less than required.
00:26:56.904 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:56.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:56.904 Initialization complete. Launching workers.
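Each controller repeats the same advisory: the target granted IO queues only 128 entries deep, so any surplus requests the workload submits sit queued inside the NVMe driver rather than on the device. For a cleaner run one would cap the perf queue depth at the granted size; a hedged sketch with illustrative values (these are assumptions, not the parameters the tc4 script actually passed):

```bash
# Illustrative spdk_nvme_perf invocation keeping the queue depth (-q) within
# the granted IO queue size of 128; the -o/-w/-t values here are assumptions,
# not the ones nvmf_shutdown_tc4 used.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```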
00:26:56.904 ========================================================
00:26:56.904 Latency(us)
00:26:56.904 Device Information : IOPS MiB/s Average min max
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1549.94 66.60 82015.07 111.48 1311525.83
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1550.77 66.63 81283.84 114.74 1224099.24
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1543.42 66.32 81784.85 114.38 1236317.33
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1556.79 66.89 81205.12 113.13 1239760.73
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1553.95 66.77 81474.43 111.09 1236230.13
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1557.29 66.91 81412.08 110.25 1242850.36
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1536.23 66.01 82653.88 110.49 1292083.53
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1556.62 66.89 81253.31 111.17 1269329.39
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1567.15 67.34 95002.99 113.02 2246556.71
00:26:56.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1567.99 67.37 95058.74 111.61 2132502.03
00:26:56.904 ========================================================
00:26:56.904 Total : 15540.16 667.74 84336.32 110.25 2246556.71
00:26:56.904
00:26:56.904 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 403249 ']'
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 403249
00:26:56.904 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 403249 ']'
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 403249
00:26:56.905 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (403249) - No such process
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 403249 is not found'
00:26:56.905 Process with pid 403249 is not found
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:26:56.905
00:26:56.905 real 0m8.749s
00:26:56.905 user 0m32.090s
00:26:56.905 sys 0m1.478s
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:56.905 ************************************
00:26:56.905 END TEST nvmf_shutdown_tc4
00:26:56.905 ************************************
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:26:56.905
00:26:56.905 real 0m33.532s
00:26:56.905 user 1m37.215s
00:26:56.905 sys 0m11.137s
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:56.905 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:56.905 ************************************
00:26:56.905 END TEST nvmf_shutdown
00:26:56.905 ************************************
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
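The killprocess trace above relies on `kill -0`, which probes whether a PID exists without delivering any signal; here the nvmf target (pid 403249) is already gone, hence the "No such process" from bash and the "not found" message. A minimal sketch of the pattern (illustrative; the real helper in autotest_common.sh also waits on the process and escalates signals):

```bash
# Sketch of the killprocess pattern seen above; illustrative only, not the
# real autotest_common.sh implementation.
killprocess() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then   # signal 0: existence probe only
        kill "$pid" && wait "$pid"        # terminate and reap if still alive
    else
        echo "Process with pid $pid is not found"
    fi
}
```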
00:26:57.164 ************************************
00:26:57.164 START TEST nvmf_nsid
00:26:57.164 ************************************
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:26:57.164 * Looking for test storage...
00:26:57.164 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:26:57.164 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:57.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:57.424 --rc genhtml_branch_coverage=1
00:26:57.424 --rc genhtml_function_coverage=1
00:26:57.424 --rc genhtml_legend=1
00:26:57.424 --rc geninfo_all_blocks=1
00:26:57.424 --rc geninfo_unexecuted_blocks=1
00:26:57.424
00:26:57.424 '
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:57.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:57.424 --rc genhtml_branch_coverage=1
00:26:57.424 --rc genhtml_function_coverage=1
00:26:57.424 --rc genhtml_legend=1
00:26:57.424 --rc geninfo_all_blocks=1
00:26:57.424 --rc geninfo_unexecuted_blocks=1
00:26:57.424
00:26:57.424 '
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:26:57.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:57.424 --rc genhtml_branch_coverage=1
00:26:57.424 --rc genhtml_function_coverage=1
00:26:57.424 --rc genhtml_legend=1
00:26:57.424 --rc geninfo_all_blocks=1
00:26:57.424 --rc geninfo_unexecuted_blocks=1
00:26:57.424
00:26:57.424 '
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:26:57.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:57.424 --rc genhtml_branch_coverage=1
00:26:57.424 --rc genhtml_function_coverage=1
00:26:57.424 --rc genhtml_legend=1
00:26:57.424 --rc geninfo_all_blocks=1
00:26:57.424 --rc geninfo_unexecuted_blocks=1
00:26:57.424
00:26:57.424 '
00:26:57.424 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
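The cmp_versions trace above is scripts/common.sh deciding that lcov 1.15 predates 2.x (so the legacy --rc option spelling is kept): both version strings are split on '.', '-' and ':', then compared field by field. A compact sketch of the same "less than" test; version_lt is a hypothetical name, and the real cmp_versions supports more operators than this:

```bash
# Compact sketch of a dotted-version "less than" check in the spirit of the
# cmp_versions trace above; version_lt is a hypothetical helper name.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1  # strictly greater: not less
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0  # strictly smaller: less
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x: keep legacy --rc options"
```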
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:57.425 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
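The `[: : integer expression expected` message above is stderr noise rather than a failure: line 33 of nvmf/common.sh feeds an empty variable into a numeric `[ ... -eq 1 ]` test, `[` rejects it, and the script falls through to the next branch. A defensive form that keeps the same logic quiet (VAR is a stand-in; the trace does not show which variable is unset):

    # ${VAR:-0} substitutes 0 when VAR is unset or empty, so the numeric
    # comparison is always well-formed and no stderr warning is emitted.
    if [ "${VAR:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi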
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.425 19:21:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:05.549 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.549 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.549 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.549 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.549 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.549 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.549 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.549 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.550 19:21:38 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:05.550 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:05.550 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
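Both ports reported here (0000:d9:00.0 and 0000:d9:00.1, vendor 0x15b3 device 0x1015, i.e. a dual-port ConnectX-4 Lx) come out of a sysfs walk rather than lspci parsing. A minimal sketch of that walk, with the IDs taken from the trace:

    # Collect RDMA-capable ports by PCI vendor/device ID, then map each
    # function to the netdev its bound driver registered (mlx_0_0 etc.).
    mellanox=0x15b3
    for dev in /sys/bus/pci/devices/*; do
        [[ $(<"$dev/vendor") == "$mellanox" && $(<"$dev/device") == 0x1015 ]] || continue
        [[ -d $dev/net ]] || continue    # skip functions with no netdev yet
        for ifc in "$dev"/net/*; do
            echo "Found ${dev##*/} ($mellanox - 0x1015): ${ifc##*/}"
        done
    done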
00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:05.550 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:05.550 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:05.550 19:21:38 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
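With ib_core, ib_uverbs, rdma_cm and the rest of the stack loaded, allocate_nic_ips reads each port's IPv4 address with the awk/cut pipeline just issued. Pulled out as a helper, the idiom is:

    # Field 4 of `ip -o -4 addr show DEV` is the CIDR address
    # (192.168.100.8/24 on this node); cut strips the prefix length.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8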
00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:05.550 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:05.551 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:05.551 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:05.551 altname enp217s0f0np0 00:27:05.551 altname ens818f0np0 00:27:05.551 inet 192.168.100.8/24 scope global mlx_0_0 00:27:05.551 valid_lft forever preferred_lft forever 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:05.551 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:05.551 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:05.551 altname enp217s0f1np1 00:27:05.551 altname ens818f1np1 00:27:05.551 inet 192.168.100.9/24 scope global mlx_0_1 00:27:05.551 valid_lft forever preferred_lft forever 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:05.551 
19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:05.551 192.168.100.9' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:05.551 192.168.100.9' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:05.551 192.168.100.9' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:05.551 19:21:38 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=408034 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 408034 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 408034 ']' 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.551 19:21:38 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:05.551 [2024-12-13 19:21:38.876064] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:05.551 [2024-12-13 19:21:38.876112] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.551 [2024-12-13 19:21:38.965631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.551 [2024-12-13 19:21:38.986948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.551 [2024-12-13 19:21:38.986983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.551 [2024-12-13 19:21:38.986993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.551 [2024-12-13 19:21:38.987001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.551 [2024-12-13 19:21:38.987008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
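nvmfappstart above reduces to: launch nvmf_tgt with a core mask, record the pid (408034 here), and block until the RPC socket answers. A condensed sketch of that handshake; the rpc_get_methods probe and the retry budget are assumptions, and the harness's waitforlisten is more thorough:

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # as in the trace
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    # Poll until the target's RPC server is reachable on the default socket.
    for (( i = 0; i < 100; i++ )); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done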
00:27:05.551 [2024-12-13 19:21:38.987617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=408179 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:27:05.551 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8afe2676-362a-429e-988d-b947f67c7c9f 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=51b84384-218e-48aa-8868-e5a31e73e731 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=5ebf6d9a-54aa-41e5-b431-376dbc391a66 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.552 19:21:39 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:05.552 null0 00:27:05.552 [2024-12-13 19:21:39.173481] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:05.552 [2024-12-13 19:21:39.173532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid408179 ] 00:27:05.552 null1 00:27:05.552 null2 00:27:05.552 [2024-12-13 19:21:39.208624] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17c1810/0x169e040) succeed. 00:27:05.552 [2024-12-13 19:21:39.217657] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17c2c70/0x16df6e0) succeed. 00:27:05.552 [2024-12-13 19:21:39.267935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.552 [2024-12-13 19:21:39.268276] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:05.552 [2024-12-13 19:21:39.290943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 408179 /var/tmp/tgt2.sock 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 408179 ']' 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:27:05.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:27:05.552 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:27:05.552 [2024-12-13 19:21:39.838022] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa19e30/0x842de0) succeed. 00:27:05.552 [2024-12-13 19:21:39.848947] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xab0c20/0x884480) succeed. 
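The second target comes up on its own socket (/var/tmp/tgt2.sock, pid 408179, core mask 0x2) and is then provisioned over rpc.py: the three null bdevs (null0/null1/null2 above) back namespaces whose NGUIDs are the freshly generated UUIDs with the dashes stripped. The RPC batch itself is not echoed in the trace; a plausible reconstruction with standard rpc.py commands, where the bdev size/block size and the single-namespace excerpt are illustrative:

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc nvmf_create_transport -t rdma
    $rpc bdev_null_create null0 100 4096                        # name, size (MiB), block size
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a    # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -g "${ns1uuid//-/}"
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4421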
00:27:05.552 [2024-12-13 19:21:39.891015] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:05.552 nvme0n1 nvme0n2 00:27:05.552 nvme1n1 00:27:05.811 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:27:05.811 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:27:05.811 19:21:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8afe2676-362a-429e-988d-b947f67c7c9f 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8afe2676362a429e988db947f67c7c9f 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8AFE2676362A429E988DB947F67C7C9F 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8AFE2676362A429E988DB947F67C7C9F == \8\A\F\E\2\6\7\6\3\6\2\A\4\2\9\E\9\8\8\D\B\9\4\7\F\6\7\C\7\C\9\F ]] 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:13.931 19:21:46 
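The comparison that just passed is the point of the whole test: the NGUID the kernel reports for nvme0n1 must equal ns1uuid with its dashes removed. The idiom, pulled out of the trace (uuid2nguid mirrors the `tr -d -` call, the nguid lookup mirrors the `nvme id-ns | jq` pipeline):

    uuid2nguid() { tr -d - <<< "${1^^}"; }    # 8afe2676-...-7c9f -> 8AFE2676...7C9F
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${nguid^^} == "$(uuid2nguid "$ns1uuid")" ]] || exit 1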
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 51b84384-218e-48aa-8868-e5a31e73e731 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=51b84384218e48aa8868e5a31e73e731 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 51B84384218E48AA8868E5A31E73E731 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 51B84384218E48AA8868E5A31E73E731 == \5\1\B\8\4\3\8\4\2\1\8\E\4\8\A\A\8\8\6\8\E\5\A\3\1\E\7\3\E\7\3\1 ]] 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 5ebf6d9a-54aa-41e5-b431-376dbc391a66 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:13.931 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:27:13.932 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:27:13.932 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:27:13.932 19:21:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:13.932 19:21:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5ebf6d9a54aa41e5b431376dbc391a66 00:27:13.932 19:21:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5EBF6D9A54AA41E5B431376DBC391A66 00:27:13.932 19:21:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 5EBF6D9A54AA41E5B431376DBC391A66 == 
\5\E\B\F\6\D\9\A\5\4\A\A\4\1\E\5\B\4\3\1\3\7\6\D\B\C\3\9\1\A\6\6 ]] 00:27:13.932 19:21:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:27:20.501 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:20.501 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:27:20.501 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 408179 00:27:20.501 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 408179 ']' 00:27:20.501 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 408179 00:27:20.501 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:27:20.501 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 408179 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 408179' 00:27:20.502 killing process with pid 408179 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 408179 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 408179 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:20.502 rmmod nvme_rdma 00:27:20.502 rmmod nvme_fabrics 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 408034 ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 408034 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 408034 ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 408034 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 408034 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 408034' 00:27:20.502 killing process with pid 408034 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 408034 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 408034 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:20.502 00:27:20.502 real 0m23.473s 00:27:20.502 user 0m33.194s 00:27:20.502 sys 0m6.832s 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:20.502 ************************************ 00:27:20.502 END TEST nvmf_nsid 00:27:20.502 ************************************ 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:20.502 00:27:20.502 real 15m56.928s 00:27:20.502 user 47m56.374s 00:27:20.502 sys 3m24.533s 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.502 19:21:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:20.502 ************************************ 00:27:20.502 END TEST nvmf_target_extra 00:27:20.502 ************************************ 00:27:20.761 19:21:54 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:27:20.761 19:21:54 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:20.761 19:21:54 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.761 19:21:54 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:20.761 ************************************ 00:27:20.761 START TEST nvmf_host 00:27:20.761 ************************************ 00:27:20.761 19:21:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:27:20.761 * Looking for test storage... 
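Teardown for both targets (pids 408179 then 408034 above) runs the same guarded kill before the nvmf_host output resumes: confirm the pid is still alive, confirm it still names the reactor it should, then kill and reap. Condensed from the trace:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                       # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1           # sketch: never reap a sudo wrapper here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }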
00:27:20.761 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:27:20.761 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:20.761 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:20.761 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:21.020 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:21.020 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.020 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.020 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.020 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.020 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.020 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.020 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.021 --rc genhtml_branch_coverage=1 00:27:21.021 --rc genhtml_function_coverage=1 00:27:21.021 --rc genhtml_legend=1 00:27:21.021 --rc geninfo_all_blocks=1 00:27:21.021 --rc geninfo_unexecuted_blocks=1 00:27:21.021 00:27:21.021 ' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:27:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.021 --rc genhtml_branch_coverage=1 00:27:21.021 --rc genhtml_function_coverage=1 00:27:21.021 --rc genhtml_legend=1 00:27:21.021 --rc geninfo_all_blocks=1 00:27:21.021 --rc geninfo_unexecuted_blocks=1 00:27:21.021 00:27:21.021 ' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.021 --rc genhtml_branch_coverage=1 00:27:21.021 --rc genhtml_function_coverage=1 00:27:21.021 --rc genhtml_legend=1 00:27:21.021 --rc geninfo_all_blocks=1 00:27:21.021 --rc geninfo_unexecuted_blocks=1 00:27:21.021 00:27:21.021 ' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.021 --rc genhtml_branch_coverage=1 00:27:21.021 --rc genhtml_function_coverage=1 00:27:21.021 --rc genhtml_legend=1 00:27:21.021 --rc geninfo_all_blocks=1 00:27:21.021 --rc geninfo_unexecuted_blocks=1 00:27:21.021 00:27:21.021 ' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:21.021 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.021 ************************************ 00:27:21.021 START TEST nvmf_multicontroller 00:27:21.021 ************************************ 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:27:21.021 * Looking for test storage... 00:27:21.021 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:27:21.021 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.280 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:21.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.281 --rc genhtml_branch_coverage=1 00:27:21.281 --rc genhtml_function_coverage=1 00:27:21.281 --rc genhtml_legend=1 00:27:21.281 --rc geninfo_all_blocks=1 00:27:21.281 --rc geninfo_unexecuted_blocks=1 00:27:21.281 00:27:21.281 ' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:21.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.281 --rc genhtml_branch_coverage=1 00:27:21.281 --rc genhtml_function_coverage=1 00:27:21.281 --rc genhtml_legend=1 00:27:21.281 --rc geninfo_all_blocks=1 00:27:21.281 --rc geninfo_unexecuted_blocks=1 00:27:21.281 00:27:21.281 ' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:21.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.281 --rc genhtml_branch_coverage=1 00:27:21.281 --rc genhtml_function_coverage=1 00:27:21.281 --rc genhtml_legend=1 00:27:21.281 --rc geninfo_all_blocks=1 00:27:21.281 --rc geninfo_unexecuted_blocks=1 00:27:21.281 00:27:21.281 ' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:21.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.281 --rc genhtml_branch_coverage=1 00:27:21.281 --rc genhtml_function_coverage=1 00:27:21.281 --rc genhtml_legend=1 00:27:21.281 --rc geninfo_all_blocks=1 00:27:21.281 --rc geninfo_unexecuted_blocks=1 00:27:21.281 00:27:21.281 ' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
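
The cmp_versions trace just above is the gate for those LCOV flags: the installed lcov reports 1.15, each version string is split on ".-:" into numeric fields, and fields are compared left to right with the shorter list padded with zeros, so 1.15 < 2 selects the plain --rc lcov_* option spelling. A minimal bash sketch of that comparison, assuming purely numeric fields (the helper's decimal() step, which coerces anything else to 0, is omitted):

lt() { # usage: lt VER1 VER2 -> status 0 when VER1 < VER2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field: less-than
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger field: not less-than
  done
  return 1                                            # equal throughout: not less-than
}
lt 1.15 2 && echo "lcov < 2: keep the plain --rc lcov_* options"

The recurring stderr line "[: : integer expression expected" a few steps earlier is unrelated to this gate: common.sh line 33 hands single-bracket test's -eq an empty string from an unset or empty variable, which test(1) rejects as a non-integer; the harness tolerates the failed test, and an expansion default such as ${var:-0} would silence it.
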
00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:21.281 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:21.281 19:21:55 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:27:21.281 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:27:21.281 00:27:21.281 real 0m0.227s 00:27:21.281 user 0m0.115s 00:27:21.281 sys 0m0.130s 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:21.281 ************************************ 00:27:21.281 END TEST nvmf_multicontroller 00:27:21.281 ************************************ 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.281 19:21:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.282 ************************************ 00:27:21.282 START TEST nvmf_aer 00:27:21.282 ************************************ 00:27:21.282 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:27:21.282 * Looking for test storage... 
00:27:21.282 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:21.282 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:21.282 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:27:21.282 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.541 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.542 --rc genhtml_branch_coverage=1 00:27:21.542 --rc genhtml_function_coverage=1 00:27:21.542 --rc genhtml_legend=1 00:27:21.542 --rc geninfo_all_blocks=1 00:27:21.542 --rc geninfo_unexecuted_blocks=1 00:27:21.542 00:27:21.542 ' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.542 --rc genhtml_branch_coverage=1 00:27:21.542 --rc genhtml_function_coverage=1 00:27:21.542 --rc genhtml_legend=1 00:27:21.542 --rc geninfo_all_blocks=1 00:27:21.542 --rc geninfo_unexecuted_blocks=1 00:27:21.542 00:27:21.542 ' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.542 --rc genhtml_branch_coverage=1 00:27:21.542 --rc genhtml_function_coverage=1 00:27:21.542 --rc genhtml_legend=1 00:27:21.542 --rc geninfo_all_blocks=1 00:27:21.542 --rc geninfo_unexecuted_blocks=1 00:27:21.542 00:27:21.542 ' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.542 --rc genhtml_branch_coverage=1 00:27:21.542 --rc genhtml_function_coverage=1 00:27:21.542 --rc genhtml_legend=1 00:27:21.542 --rc geninfo_all_blocks=1 00:27:21.542 --rc geninfo_unexecuted_blocks=1 00:27:21.542 00:27:21.542 ' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:21.542 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:21.542 19:21:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:29.670 19:22:02 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:29.670 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:29.670 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:29.670 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.670 
19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:29.670 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:29.670 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.671 19:22:02 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:29.671 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:29.671 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:29.671 altname enp217s0f0np0 00:27:29.671 altname ens818f0np0 00:27:29.671 inet 192.168.100.8/24 scope global mlx_0_0 00:27:29.671 valid_lft forever preferred_lft forever 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:29.671 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:29.671 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:29.671 altname enp217s0f1np1 00:27:29.671 altname ens818f1np1 00:27:29.671 inet 192.168.100.9/24 scope global mlx_0_1 00:27:29.671 valid_lft forever preferred_lft forever 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:29.671 192.168.100.9' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:29.671 192.168.100.9' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:29.671 192.168.100.9' 
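
The awk/cut pipeline traced above is how the harness reduces each RDMA netdev to a bare IPv4 address: field 4 of ip -o -4 addr show is ADDR/PREFIX, and cutting at the slash leaves the address; the first harvested address becomes the target IP and the second the secondary. A condensed sketch of that idiom under this run's interface names (mlx_0_0/mlx_0_1 here; substitute your own):

get_ip_address() {
  local interface=$1
  # one line per address; $4 is e.g. "192.168.100.8/24" -> keep only the address
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
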
00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=414302 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 414302 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 414302 ']' 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.671 19:22:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.671 [2024-12-13 19:22:03.042729] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:29.671 [2024-12-13 19:22:03.042791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.671 [2024-12-13 19:22:03.136561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.671 [2024-12-13 19:22:03.160574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.671 [2024-12-13 19:22:03.160612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.671 [2024-12-13 19:22:03.160622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.671 [2024-12-13 19:22:03.160630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:29.671 [2024-12-13 19:22:03.160636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.671 [2024-12-13 19:22:03.162276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.671 [2024-12-13 19:22:03.162389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.671 [2024-12-13 19:22:03.162472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.671 [2024-12-13 19:22:03.162474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.671 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 [2024-12-13 19:22:03.341341] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x740540/0x7449f0) succeed. 00:27:29.672 [2024-12-13 19:22:03.350604] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x741b80/0x786090) succeed. 
00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 Malloc0 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 [2024-12-13 19:22:03.530664] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 [ 00:27:29.672 { 00:27:29.672 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:29.672 "subtype": "Discovery", 00:27:29.672 "listen_addresses": [], 00:27:29.672 "allow_any_host": true, 00:27:29.672 "hosts": [] 00:27:29.672 }, 00:27:29.672 { 00:27:29.672 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.672 "subtype": "NVMe", 00:27:29.672 "listen_addresses": [ 00:27:29.672 { 00:27:29.672 "trtype": "RDMA", 00:27:29.672 "adrfam": "IPv4", 00:27:29.672 "traddr": "192.168.100.8", 00:27:29.672 "trsvcid": "4420" 00:27:29.672 } 00:27:29.672 ], 00:27:29.672 "allow_any_host": true, 00:27:29.672 "hosts": [], 00:27:29.672 "serial_number": "SPDK00000000000001", 00:27:29.672 "model_number": "SPDK bdev Controller", 00:27:29.672 "max_namespaces": 2, 00:27:29.672 "min_cntlid": 1, 00:27:29.672 "max_cntlid": 65519, 00:27:29.672 "namespaces": [ 00:27:29.672 { 00:27:29.672 "nsid": 1, 00:27:29.672 "bdev_name": "Malloc0", 00:27:29.672 "name": "Malloc0", 00:27:29.672 "nguid": "7668242B6CA940F1894ED7A31B189C2F", 00:27:29.672 "uuid": "7668242b-6ca9-40f1-894e-d7a31b189c2f" 00:27:29.672 } 00:27:29.672 ] 00:27:29.672 } 00:27:29.672 ] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=414523 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 Malloc1 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 [ 00:27:29.672 { 00:27:29.672 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:29.672 "subtype": "Discovery", 00:27:29.672 "listen_addresses": [], 00:27:29.672 "allow_any_host": true, 00:27:29.672 "hosts": [] 00:27:29.672 }, 00:27:29.672 { 00:27:29.672 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.672 "subtype": "NVMe", 00:27:29.672 "listen_addresses": [ 00:27:29.672 { 00:27:29.672 "trtype": "RDMA", 00:27:29.672 "adrfam": "IPv4", 00:27:29.672 "traddr": "192.168.100.8", 00:27:29.672 "trsvcid": "4420" 00:27:29.672 } 00:27:29.672 ], 00:27:29.672 "allow_any_host": true, 00:27:29.672 "hosts": [], 00:27:29.672 "serial_number": "SPDK00000000000001", 00:27:29.672 "model_number": "SPDK bdev Controller", 00:27:29.672 "max_namespaces": 2, 00:27:29.672 "min_cntlid": 1, 00:27:29.672 "max_cntlid": 65519, 00:27:29.672 "namespaces": [ 00:27:29.672 { 00:27:29.672 "nsid": 1, 00:27:29.672 "bdev_name": "Malloc0", 00:27:29.672 "name": "Malloc0", 00:27:29.672 "nguid": "7668242B6CA940F1894ED7A31B189C2F", 00:27:29.672 "uuid": "7668242b-6ca9-40f1-894e-d7a31b189c2f" 00:27:29.672 }, 00:27:29.672 { 00:27:29.672 "nsid": 2, 00:27:29.672 "bdev_name": "Malloc1", 00:27:29.672 "name": "Malloc1", 00:27:29.672 "nguid": "5B4DBCAF1C25404E97341CC69D675107", 00:27:29.672 "uuid": "5b4dbcaf-1c25-404e-9734-1cc69d675107" 00:27:29.672 } 00:27:29.672 ] 00:27:29.672 } 00:27:29.672 ] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 414523 00:27:29.672 Asynchronous Event Request test 00:27:29.672 Attaching to 192.168.100.8 00:27:29.672 Attached to 192.168.100.8 00:27:29.672 Registering asynchronous event callbacks... 00:27:29.672 Starting namespace attribute notice tests for all controllers... 00:27:29.672 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:29.672 aer_cb - Changed Namespace 00:27:29.672 Cleaning up... 
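Everything from transport creation to the "aer_cb - Changed Namespace" line above is one scenario: build a single-namespace RDMA subsystem, start the aer listener, then hot-add a second namespace so the target emits the namespace-attribute-changed event the listener is waiting for. A condensed, stand-alone sketch of that flow, assuming an SPDK build tree with scripts/rpc.py on the default socket (RPC names and arguments are copied from the trace):

    rpc="./scripts/rpc.py"
    $rpc bdev_malloc_create 64 512 --name Malloc0     # 64 MB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Start the AER listener in the background; it creates the touch file once it is
    # ready, which is the synchronization visible in the waitforfile trace above.
    ./test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!

    # waitforfile, as traced from common/autotest_common.sh: poll up to 200 x 0.1 s.
    i=0
    while [ ! -e /tmp/aer_touch_file ] && [ "$i" -lt 200 ]; do i=$((i + 1)); sleep 0.1; done

    # Hot-adding a second namespace (nsid 2) changes the subsystem's namespace list
    # and triggers the AEN; the listener logs "aer_cb - Changed Namespace" and exits.
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait $aerpid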
00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:29.672 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:29.673 rmmod nvme_rdma 00:27:29.673 rmmod nvme_fabrics 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 414302 ']' 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 414302 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 414302 ']' 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 414302 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.673 19:22:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 414302 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 414302' 00:27:29.932 killing process with pid 
414302 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 414302 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 414302 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:29.932 00:27:29.932 real 0m8.735s 00:27:29.932 user 0m6.340s 00:27:29.932 sys 0m6.122s 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.932 19:22:04 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.932 ************************************ 00:27:29.932 END TEST nvmf_aer 00:27:29.932 ************************************ 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.192 ************************************ 00:27:30.192 START TEST nvmf_async_init 00:27:30.192 ************************************ 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:27:30.192 * Looking for test storage... 00:27:30.192 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:30.192 19:22:04 
nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:30.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.192 --rc genhtml_branch_coverage=1 00:27:30.192 --rc genhtml_function_coverage=1 00:27:30.192 --rc genhtml_legend=1 00:27:30.192 --rc geninfo_all_blocks=1 00:27:30.192 --rc geninfo_unexecuted_blocks=1 00:27:30.192 00:27:30.192 ' 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:30.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.192 --rc genhtml_branch_coverage=1 00:27:30.192 --rc genhtml_function_coverage=1 00:27:30.192 --rc genhtml_legend=1 00:27:30.192 --rc geninfo_all_blocks=1 00:27:30.192 --rc geninfo_unexecuted_blocks=1 00:27:30.192 00:27:30.192 ' 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:30.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.192 --rc genhtml_branch_coverage=1 00:27:30.192 --rc genhtml_function_coverage=1 00:27:30.192 --rc genhtml_legend=1 00:27:30.192 --rc geninfo_all_blocks=1 00:27:30.192 --rc geninfo_unexecuted_blocks=1 00:27:30.192 00:27:30.192 ' 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:30.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.192 --rc genhtml_branch_coverage=1 00:27:30.192 --rc genhtml_function_coverage=1 00:27:30.192 --rc genhtml_legend=1 00:27:30.192 --rc geninfo_all_blocks=1 00:27:30.192 --rc geninfo_unexecuted_blocks=1 00:27:30.192 00:27:30.192 ' 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.192 
19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.192 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:30.452 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
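The lt/cmp_versions trace a few lines up is the suite's generic version comparison, used here to detect an lcov older than 2.x and fall back to the legacy --rc lcov_*_coverage=1 option spellings. A condensed sketch of just the less-than path, assuming purely numeric fields (the traced helper splits on '.', '-' and ':'):

    # Returns 0 (true) when version $1 sorts strictly before $2, field by field.
    lt() {
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # greater -> not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # smaller -> less-than
        done
        return 1  # all fields equal -> not less-than
    }

    lt 1.15 2 && echo "old lcov: keep the lcov_-prefixed --rc options"

As in the trace, lt 1.15 2 succeeds because the first fields already decide it (1 < 2).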
00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:30.452 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8e4a977edb1d4531ab4787ced881f67c 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.453 19:22:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.578 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:38.579 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:38.579 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:38.579 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:38.579 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:38.579 19:22:11 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:38.579 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:38.579 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:38.579 altname enp217s0f0np0 00:27:38.579 altname ens818f0np0 00:27:38.579 inet 192.168.100.8/24 scope global mlx_0_0 00:27:38.579 valid_lft forever preferred_lft forever 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:38.579 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:38.579 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:38.579 altname enp217s0f1np1 00:27:38.579 altname ens818f1np1 00:27:38.579 inet 192.168.100.9/24 scope global mlx_0_1 00:27:38.579 valid_lft forever preferred_lft forever 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:38.579 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:38.580 192.168.100.9' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:38.580 192.168.100.9' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:38.580 192.168.100.9' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=418005 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 418005 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 418005 ']' 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.580 19:22:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 [2024-12-13 19:22:11.880559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:38.580 [2024-12-13 19:22:11.880609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.580 [2024-12-13 19:22:11.956678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.580 [2024-12-13 19:22:11.977817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.580 [2024-12-13 19:22:11.977850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.580 [2024-12-13 19:22:11.977859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.580 [2024-12-13 19:22:11.977867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.580 [2024-12-13 19:22:11.977874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
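By this point the async_init harness has loaded the kernel RDMA stack (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm and finally nvme-rdma), confirmed 192.168.100.8 and 192.168.100.9 on the two mlx ports via "ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1", and launched the target. A simplified sketch of the nvmfappstart/waitforlisten pair traced above, assuming an SPDK build tree and the default RPC socket (the real waitforlisten also verifies the pid stays alive while polling):

    # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups,
    # -m 0x1: single-core mask (hence "Total cores available: 1" above).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Poll the RPC socket until the app answers, then subsystem setup can proceed.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"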
00:27:38.580 [2024-12-13 19:22:11.978477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 [2024-12-13 19:22:12.134679] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe81380/0xe85830) succeed. 00:27:38.580 [2024-12-13 19:22:12.143152] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe827e0/0xec6ed0) succeed. 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 null0 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8e4a977edb1d4531ab4787ced881f67c 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 [2024-12-13 19:22:12.221173] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 nvme0n1 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.580 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.580 [ 00:27:38.580 { 00:27:38.580 "name": "nvme0n1", 00:27:38.580 "aliases": [ 00:27:38.580 "8e4a977e-db1d-4531-ab47-87ced881f67c" 00:27:38.580 ], 00:27:38.580 "product_name": "NVMe disk", 00:27:38.580 "block_size": 512, 00:27:38.580 "num_blocks": 2097152, 00:27:38.580 "uuid": "8e4a977e-db1d-4531-ab47-87ced881f67c", 00:27:38.580 "numa_id": 1, 00:27:38.580 "assigned_rate_limits": { 00:27:38.580 "rw_ios_per_sec": 0, 00:27:38.580 "rw_mbytes_per_sec": 0, 00:27:38.580 "r_mbytes_per_sec": 0, 00:27:38.580 "w_mbytes_per_sec": 0 00:27:38.580 }, 00:27:38.580 "claimed": false, 00:27:38.580 "zoned": false, 00:27:38.580 "supported_io_types": { 00:27:38.580 "read": true, 00:27:38.580 "write": true, 00:27:38.580 "unmap": false, 00:27:38.580 "flush": true, 00:27:38.580 "reset": true, 00:27:38.581 "nvme_admin": true, 00:27:38.581 "nvme_io": true, 00:27:38.581 "nvme_io_md": false, 00:27:38.581 "write_zeroes": true, 00:27:38.581 "zcopy": false, 00:27:38.581 "get_zone_info": false, 00:27:38.581 "zone_management": false, 00:27:38.581 "zone_append": false, 00:27:38.581 "compare": true, 00:27:38.581 "compare_and_write": true, 00:27:38.581 "abort": true, 00:27:38.581 "seek_hole": false, 00:27:38.581 "seek_data": false, 00:27:38.581 "copy": true, 00:27:38.581 "nvme_iov_md": false 00:27:38.581 }, 00:27:38.581 "memory_domains": [ 00:27:38.581 { 00:27:38.581 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:27:38.581 "dma_device_type": 0 00:27:38.581 } 00:27:38.581 ], 00:27:38.581 "driver_specific": { 00:27:38.581 "nvme": [ 00:27:38.581 { 00:27:38.581 "trid": { 00:27:38.581 "trtype": "RDMA", 00:27:38.581 "adrfam": "IPv4", 00:27:38.581 "traddr": "192.168.100.8", 00:27:38.581 "trsvcid": "4420", 00:27:38.581 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:38.581 }, 00:27:38.581 "ctrlr_data": { 00:27:38.581 "cntlid": 1, 00:27:38.581 "vendor_id": "0x8086", 00:27:38.581 "model_number": "SPDK bdev Controller", 00:27:38.581 "serial_number": "00000000000000000000", 00:27:38.581 "firmware_revision": "25.01", 00:27:38.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.581 "oacs": { 00:27:38.581 "security": 0, 
00:27:38.581 "format": 0, 00:27:38.581 "firmware": 0, 00:27:38.581 "ns_manage": 0 00:27:38.581 }, 00:27:38.581 "multi_ctrlr": true, 00:27:38.581 "ana_reporting": false 00:27:38.581 }, 00:27:38.581 "vs": { 00:27:38.581 "nvme_version": "1.3" 00:27:38.581 }, 00:27:38.581 "ns_data": { 00:27:38.581 "id": 1, 00:27:38.581 "can_share": true 00:27:38.581 } 00:27:38.581 } 00:27:38.581 ], 00:27:38.581 "mp_policy": "active_passive" 00:27:38.581 } 00:27:38.581 } 00:27:38.581 ] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 [2024-12-13 19:22:12.344704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:38.581 [2024-12-13 19:22:12.360865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:38.581 [2024-12-13 19:22:12.389488] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 [ 00:27:38.581 { 00:27:38.581 "name": "nvme0n1", 00:27:38.581 "aliases": [ 00:27:38.581 "8e4a977e-db1d-4531-ab47-87ced881f67c" 00:27:38.581 ], 00:27:38.581 "product_name": "NVMe disk", 00:27:38.581 "block_size": 512, 00:27:38.581 "num_blocks": 2097152, 00:27:38.581 "uuid": "8e4a977e-db1d-4531-ab47-87ced881f67c", 00:27:38.581 "numa_id": 1, 00:27:38.581 "assigned_rate_limits": { 00:27:38.581 "rw_ios_per_sec": 0, 00:27:38.581 "rw_mbytes_per_sec": 0, 00:27:38.581 "r_mbytes_per_sec": 0, 00:27:38.581 "w_mbytes_per_sec": 0 00:27:38.581 }, 00:27:38.581 "claimed": false, 00:27:38.581 "zoned": false, 00:27:38.581 "supported_io_types": { 00:27:38.581 "read": true, 00:27:38.581 "write": true, 00:27:38.581 "unmap": false, 00:27:38.581 "flush": true, 00:27:38.581 "reset": true, 00:27:38.581 "nvme_admin": true, 00:27:38.581 "nvme_io": true, 00:27:38.581 "nvme_io_md": false, 00:27:38.581 "write_zeroes": true, 00:27:38.581 "zcopy": false, 00:27:38.581 "get_zone_info": false, 00:27:38.581 "zone_management": false, 00:27:38.581 "zone_append": false, 00:27:38.581 "compare": true, 00:27:38.581 "compare_and_write": true, 00:27:38.581 "abort": true, 00:27:38.581 "seek_hole": false, 00:27:38.581 "seek_data": false, 00:27:38.581 "copy": true, 00:27:38.581 "nvme_iov_md": false 00:27:38.581 }, 00:27:38.581 "memory_domains": [ 00:27:38.581 { 00:27:38.581 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:27:38.581 "dma_device_type": 0 00:27:38.581 } 00:27:38.581 ], 00:27:38.581 "driver_specific": { 00:27:38.581 "nvme": [ 00:27:38.581 { 00:27:38.581 "trid": { 00:27:38.581 "trtype": "RDMA", 00:27:38.581 "adrfam": "IPv4", 00:27:38.581 "traddr": "192.168.100.8", 
00:27:38.581 "trsvcid": "4420", 00:27:38.581 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:38.581 }, 00:27:38.581 "ctrlr_data": { 00:27:38.581 "cntlid": 2, 00:27:38.581 "vendor_id": "0x8086", 00:27:38.581 "model_number": "SPDK bdev Controller", 00:27:38.581 "serial_number": "00000000000000000000", 00:27:38.581 "firmware_revision": "25.01", 00:27:38.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.581 "oacs": { 00:27:38.581 "security": 0, 00:27:38.581 "format": 0, 00:27:38.581 "firmware": 0, 00:27:38.581 "ns_manage": 0 00:27:38.581 }, 00:27:38.581 "multi_ctrlr": true, 00:27:38.581 "ana_reporting": false 00:27:38.581 }, 00:27:38.581 "vs": { 00:27:38.581 "nvme_version": "1.3" 00:27:38.581 }, 00:27:38.581 "ns_data": { 00:27:38.581 "id": 1, 00:27:38.581 "can_share": true 00:27:38.581 } 00:27:38.581 } 00:27:38.581 ], 00:27:38.581 "mp_policy": "active_passive" 00:27:38.581 } 00:27:38.581 } 00:27:38.581 ] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.L1OzMqWMEu 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.L1OzMqWMEu 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.L1OzMqWMEu 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 [2024-12-13 19:22:12.484228] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 [2024-12-13 19:22:12.504276] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:38.581 nvme0n1 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.581 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.581 [ 00:27:38.581 { 00:27:38.581 "name": "nvme0n1", 00:27:38.581 "aliases": [ 00:27:38.581 "8e4a977e-db1d-4531-ab47-87ced881f67c" 00:27:38.581 ], 00:27:38.581 "product_name": "NVMe disk", 00:27:38.581 "block_size": 512, 00:27:38.581 "num_blocks": 2097152, 00:27:38.581 "uuid": "8e4a977e-db1d-4531-ab47-87ced881f67c", 00:27:38.581 "numa_id": 1, 00:27:38.581 "assigned_rate_limits": { 00:27:38.581 "rw_ios_per_sec": 0, 00:27:38.581 "rw_mbytes_per_sec": 0, 00:27:38.581 "r_mbytes_per_sec": 0, 00:27:38.581 "w_mbytes_per_sec": 0 00:27:38.581 }, 00:27:38.581 "claimed": false, 00:27:38.581 "zoned": false, 00:27:38.581 "supported_io_types": { 00:27:38.581 "read": true, 00:27:38.581 "write": true, 00:27:38.581 "unmap": false, 00:27:38.581 "flush": true, 00:27:38.581 "reset": true, 00:27:38.581 "nvme_admin": true, 00:27:38.581 "nvme_io": true, 00:27:38.581 "nvme_io_md": false, 00:27:38.581 "write_zeroes": true, 00:27:38.581 "zcopy": false, 00:27:38.581 "get_zone_info": false, 00:27:38.582 "zone_management": false, 00:27:38.582 "zone_append": false, 00:27:38.582 "compare": true, 00:27:38.582 "compare_and_write": true, 00:27:38.582 "abort": true, 00:27:38.582 "seek_hole": false, 00:27:38.582 "seek_data": false, 00:27:38.582 "copy": true, 00:27:38.582 "nvme_iov_md": false 00:27:38.582 }, 00:27:38.582 "memory_domains": [ 00:27:38.582 { 00:27:38.582 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:27:38.582 "dma_device_type": 0 00:27:38.582 } 00:27:38.582 ], 00:27:38.582 "driver_specific": { 00:27:38.582 "nvme": [ 00:27:38.582 { 00:27:38.582 "trid": { 00:27:38.582 "trtype": "RDMA", 00:27:38.582 "adrfam": "IPv4", 00:27:38.582 "traddr": "192.168.100.8", 00:27:38.582 "trsvcid": "4421", 00:27:38.582 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:38.582 }, 00:27:38.582 "ctrlr_data": { 00:27:38.582 "cntlid": 3, 00:27:38.582 "vendor_id": "0x8086", 00:27:38.582 "model_number": "SPDK bdev Controller", 00:27:38.582 
"serial_number": "00000000000000000000", 00:27:38.582 "firmware_revision": "25.01", 00:27:38.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.582 "oacs": { 00:27:38.582 "security": 0, 00:27:38.582 "format": 0, 00:27:38.582 "firmware": 0, 00:27:38.582 "ns_manage": 0 00:27:38.582 }, 00:27:38.582 "multi_ctrlr": true, 00:27:38.582 "ana_reporting": false 00:27:38.582 }, 00:27:38.582 "vs": { 00:27:38.582 "nvme_version": "1.3" 00:27:38.582 }, 00:27:38.582 "ns_data": { 00:27:38.582 "id": 1, 00:27:38.582 "can_share": true 00:27:38.582 } 00:27:38.582 } 00:27:38.582 ], 00:27:38.582 "mp_policy": "active_passive" 00:27:38.582 } 00:27:38.582 } 00:27:38.582 ] 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.L1OzMqWMEu 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:38.582 rmmod nvme_rdma 00:27:38.582 rmmod nvme_fabrics 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 418005 ']' 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 418005 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 418005 ']' 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 418005 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 418005 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.582 19:22:12 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 418005' 00:27:38.582 killing process with pid 418005 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 418005 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 418005 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:38.582 00:27:38.582 real 0m8.586s 00:27:38.582 user 0m3.208s 00:27:38.582 sys 0m6.010s 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.582 19:22:12 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.582 ************************************ 00:27:38.582 END TEST nvmf_async_init 00:27:38.582 ************************************ 00:27:38.842 19:22:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:27:38.842 19:22:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:38.842 19:22:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.842 19:22:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.842 ************************************ 00:27:38.842 START TEST dma 00:27:38.842 ************************************ 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:27:38.842 * Looking for test storage... 
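[annotation] The nvmf_async_init run that just finished boils down to the RPC sequence below. This is a minimal sketch assuming a running nvmf_tgt on its default /var/tmp/spdk.sock socket; rpc.py stands in for the harness's rpc_cmd wrapper, and /tmp/psk.key is an illustrative path (the test itself uses mktemp):

# plain RDMA attach on port 4420, then inspect and reset
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1        # reports cntlid 1 before the reset
./scripts/rpc.py bdev_nvme_reset_controller nvme0 # reconnect bumps cntlid to 2
./scripts/rpc.py bdev_nvme_detach_controller nvme0
# TLS variant on port 4421: register a PSK, restrict the subsystem to one host
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk.key
chmod 0600 /tmp/psk.key
./scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
./scripts/rpc.py bdev_nvme_detach_controller nvme0 && rm -f /tmp/psk.key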
00:27:38.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.842 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:39.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.102 --rc genhtml_branch_coverage=1 00:27:39.102 --rc genhtml_function_coverage=1 00:27:39.102 --rc genhtml_legend=1 00:27:39.102 --rc geninfo_all_blocks=1 00:27:39.102 --rc geninfo_unexecuted_blocks=1 00:27:39.102 00:27:39.102 ' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:39.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.102 --rc genhtml_branch_coverage=1 00:27:39.102 --rc genhtml_function_coverage=1 00:27:39.102 --rc genhtml_legend=1 00:27:39.102 --rc geninfo_all_blocks=1 00:27:39.102 --rc geninfo_unexecuted_blocks=1 00:27:39.102 00:27:39.102 ' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:39.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.102 --rc genhtml_branch_coverage=1 00:27:39.102 --rc genhtml_function_coverage=1 00:27:39.102 --rc genhtml_legend=1 00:27:39.102 --rc geninfo_all_blocks=1 00:27:39.102 --rc geninfo_unexecuted_blocks=1 00:27:39.102 00:27:39.102 ' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:39.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.102 --rc genhtml_branch_coverage=1 00:27:39.102 --rc genhtml_function_coverage=1 00:27:39.102 --rc genhtml_legend=1 00:27:39.102 --rc geninfo_all_blocks=1 00:27:39.102 --rc geninfo_unexecuted_blocks=1 00:27:39.102 00:27:39.102 ' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.102 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:39.102 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
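[annotation] The lt/cmp_versions xtrace a few lines up (scripts/common.sh@333-368, used here to probe the lcov version) is the stock SPDK dot-release comparator. A condensed sketch of the same idea, assuming purely numeric components (the real helper also copes with non-numeric suffixes via its decimal function):

# return 0 (true) when $1 sorts strictly before $2, splitting on '.' and '-'
version_lt() {
    local IFS=.- i a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0  # missing components count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"  # matches the comparison traced above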
00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.103 19:22:13 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:47.238 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:47.238 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:47.238 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:47.239 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:47.239 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:47.239 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:47.239 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:47.239 altname enp217s0f0np0 00:27:47.239 altname ens818f0np0 00:27:47.239 inet 192.168.100.8/24 scope global mlx_0_0 00:27:47.239 valid_lft forever preferred_lft forever 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:47.239 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:47.239 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:47.239 altname enp217s0f1np1 00:27:47.239 altname ens818f1np1 00:27:47.239 inet 192.168.100.9/24 scope global mlx_0_1 00:27:47.239 valid_lft forever preferred_lft forever 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:47.239 192.168.100.9' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:47.239 192.168.100.9' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:47.239 192.168.100.9' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:47.239 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=421468 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 421468 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 421468 ']' 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 [2024-12-13 19:22:20.532715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:27:47.240 [2024-12-13 19:22:20.532762] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.240 [2024-12-13 19:22:20.623615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:47.240 [2024-12-13 19:22:20.645120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.240 [2024-12-13 19:22:20.645169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.240 [2024-12-13 19:22:20.645179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.240 [2024-12-13 19:22:20.645187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.240 [2024-12-13 19:22:20.645197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
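[annotation] At this point the harness has the target process up and is about to build the dma test's configuration with the RPCs that follow below. A condensed sketch of that bring-up, with paths as in this workspace; the readiness loop is an illustrative stand-in for the harness's waitforlisten helper:

# launch the target on cores 0-1 with every tracepoint group enabled
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# block until the RPC socket answers before issuing any configuration RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
# the dma test then creates its target stack: RDMA transport, a 256 MiB / 512 B-block malloc bdev, one subsystem
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420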
00:27:47.240 [2024-12-13 19:22:20.646521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.240 [2024-12-13 19:22:20.646521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 [2024-12-13 19:22:20.820445] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x845da0/0x84a250) succeed. 00:27:47.240 [2024-12-13 19:22:20.829395] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8472a0/0x88b8f0) succeed. 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 Malloc0 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 [2024-12-13 19:22:20.986297] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.240 { 00:27:47.240 "params": { 00:27:47.240 "name": "Nvme$subsystem", 00:27:47.240 "trtype": "$TEST_TRANSPORT", 00:27:47.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.240 "adrfam": "ipv4", 00:27:47.240 "trsvcid": "$NVMF_PORT", 00:27:47.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.240 "hdgst": ${hdgst:-false}, 00:27:47.240 "ddgst": ${ddgst:-false} 00:27:47.240 }, 00:27:47.240 "method": "bdev_nvme_attach_controller" 00:27:47.240 } 00:27:47.240 EOF 00:27:47.240 )") 00:27:47.240 19:22:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:27:47.240 19:22:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:27:47.240 19:22:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:27:47.240 19:22:21 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:47.240 "params": { 00:27:47.240 "name": "Nvme0", 00:27:47.240 "trtype": "rdma", 00:27:47.240 "traddr": "192.168.100.8", 00:27:47.240 "adrfam": "ipv4", 00:27:47.240 "trsvcid": "4420", 00:27:47.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.240 "hdgst": false, 00:27:47.240 "ddgst": false 00:27:47.240 }, 00:27:47.240 "method": "bdev_nvme_attach_controller" 00:27:47.240 }' 00:27:47.240 [2024-12-13 19:22:21.036756] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:27:47.240 [2024-12-13 19:22:21.036805] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid421496 ] 00:27:47.240 [2024-12-13 19:22:21.129511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:47.240 [2024-12-13 19:22:21.153288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.240 [2024-12-13 19:22:21.153288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.514 bdev Nvme0n1 reports 1 memory domains 00:27:52.514 bdev Nvme0n1 supports RDMA memory domain 00:27:52.514 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:52.514 ========================================================================== 00:27:52.514 Latency [us] 00:27:52.514 IOPS MiB/s Average min max 00:27:52.514 Core 2: 21095.31 82.40 757.62 242.97 8909.05 00:27:52.514 Core 3: 20943.77 81.81 763.15 242.73 8865.90 00:27:52.514 ========================================================================== 00:27:52.514 Total : 42039.08 164.22 760.37 242.73 8909.05 00:27:52.514 00:27:52.514 Total operations: 210284, translate 210284 pull_push 0 memzero 0 00:27:52.514 19:22:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:27:52.514 19:22:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:27:52.514 19:22:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:27:52.514 [2024-12-13 19:22:26.565264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:27:52.514 [2024-12-13 19:22:26.565314] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422554 ] 00:27:52.514 [2024-12-13 19:22:26.658411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:52.514 [2024-12-13 19:22:26.681235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.514 [2024-12-13 19:22:26.681237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.787 bdev Malloc0 reports 2 memory domains 00:27:57.787 bdev Malloc0 doesn't support RDMA memory domain 00:27:57.787 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:57.787 ========================================================================== 00:27:57.787 Latency [us] 00:27:57.787 IOPS MiB/s Average min max 00:27:57.787 Core 2: 14029.04 54.80 1139.80 416.54 1864.26 00:27:57.787 Core 3: 14214.36 55.52 1124.90 414.32 1891.01 00:27:57.787 ========================================================================== 00:27:57.787 Total : 28243.40 110.33 1132.30 414.32 1891.01 00:27:57.787 00:27:57.787 Total operations: 141271, translate 0 pull_push 565084 memzero 0 00:27:57.787 19:22:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:27:57.787 19:22:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:27:57.787 19:22:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:57.787 19:22:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:27:57.787 Ignoring -M option 00:27:57.787 [2024-12-13 19:22:31.995333] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:27:57.787 [2024-12-13 19:22:31.995385] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423354 ] 00:27:57.787 [2024-12-13 19:22:32.089088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:57.787 [2024-12-13 19:22:32.112200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.787 [2024-12-13 19:22:32.112202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.356 bdev 357b68c5-0c75-4a2a-afd1-7e6e5e148085 reports 1 memory domains 00:28:04.356 bdev 357b68c5-0c75-4a2a-afd1-7e6e5e148085 supports RDMA memory domain 00:28:04.356 Initialization complete, running randread IO for 5 sec on 2 cores 00:28:04.356 ========================================================================== 00:28:04.356 Latency [us] 00:28:04.356 IOPS MiB/s Average min max 00:28:04.356 Core 2: 75151.70 293.56 212.16 86.68 3973.96 00:28:04.356 Core 3: 71669.35 279.96 222.44 89.52 3922.60 00:28:04.356 ========================================================================== 00:28:04.356 Total : 146821.05 573.52 217.18 86.68 3973.96 00:28:04.356 00:28:04.356 Total operations: 734202, translate 0 pull_push 0 memzero 734202 00:28:04.356 19:22:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:28:04.356 [2024-12-13 19:22:37.661838] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:05.734 Initializing NVMe Controllers 00:28:05.734 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:28:05.735 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:05.735 Initialization complete. Launching workers. 00:28:05.735 ======================================================== 00:28:05.735 Latency(us) 00:28:05.735 Device Information : IOPS MiB/s Average min max 00:28:05.735 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 1998.90 7.81 7972.07 6989.34 8972.78 00:28:05.735 ======================================================== 00:28:05.735 Total : 1998.90 7.81 7972.07 6989.34 8972.78 00:28:05.735 00:28:05.735 19:22:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:28:05.735 19:22:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:28:05.735 19:22:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:28:05.735 19:22:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:28:05.735 [2024-12-13 19:22:40.000996] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:05.735 [2024-12-13 19:22:40.001073] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424685 ] 00:28:05.735 [2024-12-13 19:22:40.095656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:05.994 [2024-12-13 19:22:40.120366] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.994 [2024-12-13 19:22:40.120367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.266 bdev 714fa253-bbbe-42b3-b3b9-543997117ffb reports 1 memory domains 00:28:11.266 bdev 714fa253-bbbe-42b3-b3b9-543997117ffb supports RDMA memory domain 00:28:11.266 Initialization complete, running randrw IO for 5 sec on 2 cores 00:28:11.266 ========================================================================== 00:28:11.266 Latency [us] 00:28:11.266 IOPS MiB/s Average min max 00:28:11.266 Core 2: 18584.80 72.60 860.05 20.02 13495.55 00:28:11.266 Core 3: 18915.86 73.89 845.06 25.63 13101.15 00:28:11.266 ========================================================================== 00:28:11.266 Total : 37500.66 146.49 852.49 20.02 13495.55 00:28:11.266 00:28:11.267 Total operations: 187582, translate 187476 pull_push 0 memzero 106 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:11.267 rmmod nvme_rdma 00:28:11.267 rmmod nvme_fabrics 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 421468 ']' 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 421468 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 421468 ']' 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 421468 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.267 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 421468 00:28:11.526 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:11.526 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:11.526 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421468' 00:28:11.526 killing process with 
pid 421468 00:28:11.526 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 421468 00:28:11.526 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 421468 00:28:11.785 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:11.785 19:22:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:11.785 00:28:11.785 real 0m32.909s 00:28:11.785 user 1m35.117s 00:28:11.785 sys 0m6.809s 00:28:11.785 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:11.785 19:22:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:11.785 ************************************ 00:28:11.785 END TEST dma 00:28:11.785 ************************************ 00:28:11.785 19:22:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:28:11.785 19:22:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:11.785 19:22:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.785 19:22:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.785 ************************************ 00:28:11.785 START TEST nvmf_identify 00:28:11.785 ************************************ 00:28:11.785 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:28:11.785 * Looking for test storage... 00:28:11.785 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:11.785 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:11.785 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:28:11.785 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 
00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:12.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.046 --rc genhtml_branch_coverage=1 00:28:12.046 --rc genhtml_function_coverage=1 00:28:12.046 --rc genhtml_legend=1 00:28:12.046 --rc geninfo_all_blocks=1 00:28:12.046 --rc geninfo_unexecuted_blocks=1 00:28:12.046 00:28:12.046 ' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:12.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.046 --rc genhtml_branch_coverage=1 00:28:12.046 --rc genhtml_function_coverage=1 00:28:12.046 --rc genhtml_legend=1 00:28:12.046 --rc geninfo_all_blocks=1 00:28:12.046 --rc geninfo_unexecuted_blocks=1 00:28:12.046 00:28:12.046 ' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:12.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.046 --rc genhtml_branch_coverage=1 00:28:12.046 --rc genhtml_function_coverage=1 00:28:12.046 --rc genhtml_legend=1 00:28:12.046 --rc geninfo_all_blocks=1 00:28:12.046 --rc geninfo_unexecuted_blocks=1 00:28:12.046 00:28:12.046 ' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:12.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.046 --rc genhtml_branch_coverage=1 00:28:12.046 --rc genhtml_function_coverage=1 00:28:12.046 --rc genhtml_legend=1 00:28:12.046 --rc geninfo_all_blocks=1 00:28:12.046 --rc geninfo_unexecuted_blocks=1 00:28:12.046 00:28:12.046 ' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:12.046 19:22:46 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:12.046 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:12.046 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:12.047 19:22:46 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.047 19:22:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.179 19:22:53 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:20.179 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:20.179 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:20.179 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:20.179 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:20.179 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:20.180 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:20.180 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:20.180 altname enp217s0f0np0 00:28:20.180 altname ens818f0np0 00:28:20.180 inet 192.168.100.8/24 scope global mlx_0_0 00:28:20.180 valid_lft forever preferred_lft forever 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:20.180 19:22:53 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:20.180 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:20.180 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:20.180 altname enp217s0f1np1 00:28:20.180 altname ens818f1np1 00:28:20.180 inet 192.168.100.9/24 scope global mlx_0_1 00:28:20.180 valid_lft forever preferred_lft forever 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:28:20.180 19:22:53 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:20.180 192.168.100.9' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:20.180 192.168.100.9' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:20.180 192.168.100.9' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=429005 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 429005 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 429005 ']' 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.180 [2024-12-13 19:22:53.521237] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:20.180 [2024-12-13 19:22:53.521296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.180 [2024-12-13 19:22:53.613662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.180 [2024-12-13 19:22:53.636320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.180 [2024-12-13 19:22:53.636359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.180 [2024-12-13 19:22:53.636368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.180 [2024-12-13 19:22:53.636376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.180 [2024-12-13 19:22:53.636399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.180 [2024-12-13 19:22:53.637973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.180 [2024-12-13 19:22:53.638084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.180 [2024-12-13 19:22:53.638131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.180 [2024-12-13 19:22:53.638132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.180 [2024-12-13 19:22:53.768686] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa28540/0xa2c9f0) succeed. 00:28:20.180 [2024-12-13 19:22:53.777970] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa29b80/0xa6e090) succeed. 
00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.180 Malloc0 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.180 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.181 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:20.181 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.181 19:22:53 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.181 [2024-12-13 19:22:54.012916] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.181 [ 00:28:20.181 { 00:28:20.181 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:20.181 "subtype": "Discovery", 00:28:20.181 "listen_addresses": [ 00:28:20.181 { 00:28:20.181 "trtype": "RDMA", 
00:28:20.181 "adrfam": "IPv4", 00:28:20.181 "traddr": "192.168.100.8", 00:28:20.181 "trsvcid": "4420" 00:28:20.181 } 00:28:20.181 ], 00:28:20.181 "allow_any_host": true, 00:28:20.181 "hosts": [] 00:28:20.181 }, 00:28:20.181 { 00:28:20.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.181 "subtype": "NVMe", 00:28:20.181 "listen_addresses": [ 00:28:20.181 { 00:28:20.181 "trtype": "RDMA", 00:28:20.181 "adrfam": "IPv4", 00:28:20.181 "traddr": "192.168.100.8", 00:28:20.181 "trsvcid": "4420" 00:28:20.181 } 00:28:20.181 ], 00:28:20.181 "allow_any_host": true, 00:28:20.181 "hosts": [], 00:28:20.181 "serial_number": "SPDK00000000000001", 00:28:20.181 "model_number": "SPDK bdev Controller", 00:28:20.181 "max_namespaces": 32, 00:28:20.181 "min_cntlid": 1, 00:28:20.181 "max_cntlid": 65519, 00:28:20.181 "namespaces": [ 00:28:20.181 { 00:28:20.181 "nsid": 1, 00:28:20.181 "bdev_name": "Malloc0", 00:28:20.181 "name": "Malloc0", 00:28:20.181 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:20.181 "eui64": "ABCDEF0123456789", 00:28:20.181 "uuid": "f677af1b-5f43-42fd-b03b-242aa1f34542" 00:28:20.181 } 00:28:20.181 ] 00:28:20.181 } 00:28:20.181 ] 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.181 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:20.181 [2024-12-13 19:22:54.074303] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... [2024-12-13 19:22:54.074342] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429187 ] [2024-12-13 19:22:54.137193] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) [2024-12-13 19:22:54.137252] nvme_rdma.c:2017:nvme_rdma_ctrlr_create_qpair: *DEBUG*: rqpair 0x2000003d7040, append_copy disabled [2024-12-13 19:22:54.137274] nvme_rdma.c:2460:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr [2024-12-13 19:22:54.137287] nvme_rdma.c:1238:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 [2024-12-13 19:22:54.137292] nvme_rdma.c:1242:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 [2024-12-13 19:22:54.137325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) [2024-12-13 19:22:54.148512] nvme_rdma.c: 459:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:28:20.181 [2024-12-13 19:22:54.158646] nvme_rdma.c:1124:nvme_rdma_connect_established: *DEBUG*: rc =0 00:28:20.181 [2024-12-13 19:22:54.158659] nvme_rdma.c:1129:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:28:20.181 [2024-12-13 19:22:54.158666] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158674] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158680] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158686] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158692] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158698] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158704] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158710] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158716] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158722] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158728] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158734] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158740] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158746] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158752] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158758] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158764] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158770] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158776] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158782] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158788] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158794] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158800] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 
19:22:54.158806] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158812] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158818] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158824] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158830] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158836] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158842] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158848] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158856] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:28:20.181 [2024-12-13 19:22:54.158861] nvme_rdma.c:1146:nvme_rdma_connect_established: *DEBUG*: rc =0 00:28:20.181 [2024-12-13 19:22:54.158866] nvme_rdma.c:1151:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:28:20.181 [2024-12-13 19:22:54.158885] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.158898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x181d00 00:28:20.181 [2024-12-13 19:22:54.164046] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.181 [2024-12-13 19:22:54.164056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:20.181 [2024-12-13 19:22:54.164064] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.164071] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:20.181 [2024-12-13 19:22:54.164078] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:28:20.181 [2024-12-13 19:22:54.164085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:28:20.181 [2024-12-13 19:22:54.164104] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.164113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.181 [2024-12-13 19:22:54.164143] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.181 [2024-12-13 19:22:54.164149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:28:20.181 [2024-12-13 19:22:54.164158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:28:20.181 [2024-12-13 19:22:54.164164] 
nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.164170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:28:20.181 [2024-12-13 19:22:54.164179] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.181 [2024-12-13 19:22:54.164186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.181 [2024-12-13 19:22:54.164211] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.181 [2024-12-13 19:22:54.164216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:28:20.182 [2024-12-13 19:22:54.164229] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:20.182 [2024-12-13 19:22:54.164244] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.182 [2024-12-13 19:22:54.164270] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.164276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:20.182 [2024-12-13 19:22:54.164290] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164299] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.182 [2024-12-13 19:22:54.164323] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.164329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164335] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:20.182 [2024-12-13 19:22:54.164341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:20.182 [2024-12-13 19:22:54.164347] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 
19:22:54.164354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:20.182 [2024-12-13 19:22:54.164463] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:28:20.182 [2024-12-13 19:22:54.164469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:20.182 [2024-12-13 19:22:54.164478] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.182 [2024-12-13 19:22:54.164509] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.164514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:20.182 [2024-12-13 19:22:54.164526] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164534] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.182 [2024-12-13 19:22:54.164559] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.164564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164570] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:20.182 [2024-12-13 19:22:54.164576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:20.182 [2024-12-13 19:22:54.164582] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:28:20.182 [2024-12-13 19:22:54.164597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:20.182 [2024-12-13 19:22:54.164609] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00 00:28:20.182 [2024-12-13 19:22:54.164661] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
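The exchange up to this point is the standard fabrics bring-up that the SPDK host driver performs inside a single connect call: a FABRIC CONNECT capsule on the admin queue, PROPERTY GET reads of VS and CAP, a disable cycle (CC.EN = 0, wait for CSTS.RDY = 0), then CC.EN = 1 and a poll until CSTS.RDY = 1. As a minimal sketch (not the test's own code; the endpoint values are taken from this log, the application name is made up, and error handling is trimmed), the host side that produces a trace like the one above looks roughly like this:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&opts);
	opts.name = "discovery_connect";   /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* RDMA/IPv4 discovery service at 192.168.100.8:4420, as in this log. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_RDMA);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "192.168.100.8");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", SPDK_NVMF_DISCOVERY_NQN);

	/*
	 * spdk_nvme_connect() drives the state machine logged above: the
	 * FABRIC CONNECT, the VS/CAP/CC/CSTS property accesses, CC.EN = 1,
	 * and the wait for CSTS.RDY = 1.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to connect\n");
		return 1;
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}

The identify, AER-configuration, and keep-alive steps that follow in the log are still part of this one call's init state machine; spdk_nvme_connect() does not return until the controller reaches the "ready" state logged below.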
00:28:20.182 [2024-12-13 19:22:54.164667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164676] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:28:20.182 [2024-12-13 19:22:54.164682] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:28:20.182 [2024-12-13 19:22:54.164687] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:28:20.182 [2024-12-13 19:22:54.164693] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:28:20.182 [2024-12-13 19:22:54.164699] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:28:20.182 [2024-12-13 19:22:54.164705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:28:20.182 [2024-12-13 19:22:54.164711] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:20.182 [2024-12-13 19:22:54.164725] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.182 [2024-12-13 19:22:54.164754] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.164760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164768] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.182 [2024-12-13 19:22:54.164782] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.182 [2024-12-13 19:22:54.164796] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.182 [2024-12-13 19:22:54.164810] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.182 [2024-12-13 19:22:54.164822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:20.182 [2024-12-13 19:22:54.164828] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:20.182 [2024-12-13 19:22:54.164848] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.182 [2024-12-13 19:22:54.164873] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.164878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164885] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:28:20.182 [2024-12-13 19:22:54.164890] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:28:20.182 [2024-12-13 19:22:54.164896] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164905] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00 00:28:20.182 [2024-12-13 19:22:54.164938] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.164943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:20.182 [2024-12-13 19:22:54.164950] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164960] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:28:20.182 [2024-12-13 19:22:54.164980] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.164988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x400 key:0x181d00 00:28:20.182 [2024-12-13 19:22:54.164996] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.165003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.182 [2024-12-13 19:22:54.165017] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.165023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
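The GET LOG PAGE (02) commands whose cdw10 low byte is 0x70 are discovery log page reads: the len:0x400 fetch above pulls the page header, and once the record count is known, the len:0xc00 fetch in the entries that follow pulls the header plus both 1024-byte records. A rough equivalent through SPDK's public admin API follows; this is a sketch only: the helper names are illustrative, the ctrlr handle is assumed to come from a connect like the one sketched earlier, and the buffer size simply mirrors this log's second fetch (a real host would read the header first to size the buffer).

#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static volatile bool g_log_done;

static void
get_log_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
	g_log_done = true;
}

static void
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* 0xc00 bytes = 1024-byte header + two 1024-byte records, as logged. */
	static uint8_t buf[0xc00];
	const struct spdk_nvmf_discovery_log_page *log = (const void *)buf;

	g_log_done = false;
	/* Log identifier 0x70 is SPDK_NVME_LOG_DISCOVERY. */
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					     buf, sizeof(buf), 0,
					     get_log_cb, NULL) != 0) {
		return;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	/* These header fields are the values printed in the report below. */
	printf("Generation Counter: %ju\n", (uintmax_t)log->genctr);
	printf("Number of Records: %ju\n", (uintmax_t)log->numrec);
	for (uint64_t i = 0; i < log->numrec; i++) {
		const struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
		printf("Entry %ju: trtype %u subnqn %s traddr %s trsvcid %s\n",
		       (uintmax_t)i, e->trtype, e->subnqn, e->traddr, e->trsvcid);
	}
}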
00:28:20.182 [2024-12-13 19:22:54.165033] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.165045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0xc00 key:0x181d00 00:28:20.182 [2024-12-13 19:22:54.165051] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:28:20.182 [2024-12-13 19:22:54.165058] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.182 [2024-12-13 19:22:54.165063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:20.183 [2024-12-13 19:22:54.165069] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:28:20.183 [2024-12-13 19:22:54.165079] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.183 [2024-12-13 19:22:54.165084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:20.183 [2024-12-13 19:22:54.165095] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:28:20.183 [2024-12-13 19:22:54.165103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x8 key:0x181d00 00:28:20.183 [2024-12-13 19:22:54.165109] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:28:20.183 [2024-12-13 19:22:54.165127] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.183 [2024-12-13 19:22:54.165133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:20.183 [2024-12-13 19:22:54.165144] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00
00:28:20.183 =====================================================
00:28:20.183 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:20.183 =====================================================
00:28:20.183 Controller Capabilities/Features
00:28:20.183 ================================
00:28:20.183 Vendor ID: 0000
00:28:20.183 Subsystem Vendor ID: 0000
00:28:20.183 Serial Number: ....................
00:28:20.183 Model Number: ........................................
00:28:20.183 Firmware Version: 25.01
00:28:20.183 Recommended Arb Burst: 0
00:28:20.183 IEEE OUI Identifier: 00 00 00
00:28:20.183 Multi-path I/O
00:28:20.183 May have multiple subsystem ports: No
00:28:20.183 May have multiple controllers: No
00:28:20.183 Associated with SR-IOV VF: No
00:28:20.183 Max Data Transfer Size: 131072
00:28:20.183 Max Number of Namespaces: 0
00:28:20.183 Max Number of I/O Queues: 1024
00:28:20.183 NVMe Specification Version (VS): 1.3
00:28:20.183 NVMe Specification Version (Identify): 1.3
00:28:20.183 Maximum Queue Entries: 128
00:28:20.183 Contiguous Queues Required: Yes
00:28:20.183 Arbitration Mechanisms Supported
00:28:20.183 Weighted Round Robin: Not Supported
00:28:20.183 Vendor Specific: Not Supported
00:28:20.183 Reset Timeout: 15000 ms
00:28:20.183 Doorbell Stride: 4 bytes
00:28:20.183 NVM Subsystem Reset: Not Supported
00:28:20.183 Command Sets Supported
00:28:20.183 NVM Command Set: Supported
00:28:20.183 Boot Partition: Not Supported
00:28:20.183 Memory Page Size Minimum: 4096 bytes
00:28:20.183 Memory Page Size Maximum: 4096 bytes
00:28:20.183 Persistent Memory Region: Not Supported
00:28:20.183 Optional Asynchronous Events Supported
00:28:20.183 Namespace Attribute Notices: Not Supported
00:28:20.183 Firmware Activation Notices: Not Supported
00:28:20.183 ANA Change Notices: Not Supported
00:28:20.183 PLE Aggregate Log Change Notices: Not Supported
00:28:20.183 LBA Status Info Alert Notices: Not Supported
00:28:20.183 EGE Aggregate Log Change Notices: Not Supported
00:28:20.183 Normal NVM Subsystem Shutdown event: Not Supported
00:28:20.183 Zone Descriptor Change Notices: Not Supported
00:28:20.183 Discovery Log Change Notices: Supported
00:28:20.183 Controller Attributes
00:28:20.183 128-bit Host Identifier: Not Supported
00:28:20.183 Non-Operational Permissive Mode: Not Supported
00:28:20.183 NVM Sets: Not Supported
00:28:20.183 Read Recovery Levels: Not Supported
00:28:20.183 Endurance Groups: Not Supported
00:28:20.183 Predictable Latency Mode: Not Supported
00:28:20.183 Traffic Based Keep ALive: Not Supported
00:28:20.183 Namespace Granularity: Not Supported
00:28:20.183 SQ Associations: Not Supported
00:28:20.183 UUID List: Not Supported
00:28:20.183 Multi-Domain Subsystem: Not Supported
00:28:20.183 Fixed Capacity Management: Not Supported
00:28:20.183 Variable Capacity Management: Not Supported
00:28:20.183 Delete Endurance Group: Not Supported
00:28:20.183 Delete NVM Set: Not Supported
00:28:20.183 Extended LBA Formats Supported: Not Supported
00:28:20.183 Flexible Data Placement Supported: Not Supported
00:28:20.183 
00:28:20.183 Controller Memory Buffer Support
00:28:20.183 ================================
00:28:20.183 Supported: No
00:28:20.183 
00:28:20.183 Persistent Memory Region Support
00:28:20.183 ================================
00:28:20.183 Supported: No
00:28:20.183 
00:28:20.183 Admin Command Set Attributes
00:28:20.183 ============================
00:28:20.183 Security Send/Receive: Not Supported
00:28:20.183 Format NVM: Not Supported
00:28:20.183 Firmware Activate/Download: Not Supported
00:28:20.183 Namespace Management: Not Supported
00:28:20.183 Device Self-Test: Not Supported
00:28:20.183 Directives: Not Supported
00:28:20.183 NVMe-MI: Not Supported
00:28:20.183 Virtualization Management: Not Supported
00:28:20.183 Doorbell Buffer Config: Not Supported
00:28:20.183 Get LBA Status Capability: Not Supported
00:28:20.183 Command & Feature Lockdown Capability: Not Supported
00:28:20.183 Abort Command Limit: 1
00:28:20.183 Async Event Request Limit: 4
00:28:20.183 Number of Firmware Slots: N/A
00:28:20.183 Firmware Slot 1 Read-Only: N/A
00:28:20.183 Firmware Activation Without Reset: N/A
00:28:20.183 Multiple Update Detection Support: N/A
00:28:20.183 Firmware Update Granularity: No Information Provided
00:28:20.183 Per-Namespace SMART Log: No
00:28:20.183 Asymmetric Namespace Access Log Page: Not Supported
00:28:20.183 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:20.183 Command Effects Log Page: Not Supported
00:28:20.183 Get Log Page Extended Data: Supported
00:28:20.183 Telemetry Log Pages: Not Supported
00:28:20.183 Persistent Event Log Pages: Not Supported
00:28:20.183 Supported Log Pages Log Page: May Support
00:28:20.183 Commands Supported & Effects Log Page: Not Supported
00:28:20.183 Feature Identifiers & Effects Log Page:May Support
00:28:20.183 NVMe-MI Commands & Effects Log Page: May Support
00:28:20.183 Data Area 4 for Telemetry Log: Not Supported
00:28:20.183 Error Log Page Entries Supported: 128
00:28:20.183 Keep Alive: Not Supported
00:28:20.183 
00:28:20.183 NVM Command Set Attributes
00:28:20.183 ==========================
00:28:20.183 Submission Queue Entry Size
00:28:20.183 Max: 1
00:28:20.183 Min: 1
00:28:20.183 Completion Queue Entry Size
00:28:20.183 Max: 1
00:28:20.183 Min: 1
00:28:20.183 Number of Namespaces: 0
00:28:20.183 Compare Command: Not Supported
00:28:20.183 Write Uncorrectable Command: Not Supported
00:28:20.183 Dataset Management Command: Not Supported
00:28:20.183 Write Zeroes Command: Not Supported
00:28:20.183 Set Features Save Field: Not Supported
00:28:20.183 Reservations: Not Supported
00:28:20.183 Timestamp: Not Supported
00:28:20.183 Copy: Not Supported
00:28:20.183 Volatile Write Cache: Not Present
00:28:20.183 Atomic Write Unit (Normal): 1
00:28:20.183 Atomic Write Unit (PFail): 1
00:28:20.183 Atomic Compare & Write Unit: 1
00:28:20.183 Fused Compare & Write: Supported
00:28:20.183 Scatter-Gather List
00:28:20.183 SGL Command Set: Supported
00:28:20.183 SGL Keyed: Supported
00:28:20.183 SGL Bit Bucket Descriptor: Not Supported
00:28:20.183 SGL Metadata Pointer: Not Supported
00:28:20.183 Oversized SGL: Not Supported
00:28:20.183 SGL Metadata Address: Not Supported
00:28:20.183 SGL Offset: Supported
00:28:20.183 Transport SGL Data Block: Not Supported
00:28:20.183 Replay Protected Memory Block: Not Supported
00:28:20.183 
00:28:20.183 Firmware Slot Information
00:28:20.183 =========================
00:28:20.183 Active slot: 0
00:28:20.183 
00:28:20.183 
00:28:20.183 Error Log
00:28:20.183 =========
00:28:20.183 
00:28:20.183 Active Namespaces
00:28:20.184 =================
00:28:20.184 Discovery Log Page
00:28:20.184 ==================
00:28:20.184 Generation Counter: 2
00:28:20.184 Number of Records: 2
00:28:20.184 Record Format: 0
00:28:20.184 
00:28:20.184 Discovery Log Entry 0
00:28:20.184 ----------------------
00:28:20.184 Transport Type: 1 (RDMA)
00:28:20.184 Address Family: 1 (IPv4)
00:28:20.184 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:20.184 Entry Flags:
00:28:20.184 Duplicate Returned Information: 1
00:28:20.184 Explicit Persistent Connection Support for Discovery: 1
00:28:20.184 Transport Requirements:
00:28:20.184 Secure Channel: Not Required
00:28:20.184 Port ID: 0 (0x0000)
00:28:20.184 Controller ID: 65535 (0xffff)
00:28:20.184 Admin Max SQ Size: 128
00:28:20.184 Transport Service Identifier: 4420
00:28:20.184 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:20.184 Transport Address: 192.168.100.8
00:28:20.184 Transport Specific Address Subtype - RDMA
00:28:20.184 RDMA QP Service Type: 1 (Reliable Connected)
00:28:20.184 RDMA Provider Type: 1 (No provider specified)
00:28:20.184 RDMA CM Service: 1 (RDMA_CM)
00:28:20.184 Discovery Log Entry 1
00:28:20.184 ----------------------
00:28:20.184 Transport Type: 1 (RDMA)
00:28:20.184 Address Family: 1 (IPv4)
00:28:20.184 Subsystem Type: 2 (NVM Subsystem)
00:28:20.184 Entry Flags:
00:28:20.184 Duplicate Returned Information: 0
00:28:20.184 Explicit Persistent Connection Support for Discovery: 0
00:28:20.184 Transport Requirements:
00:28:20.184 Secure Channel: Not Required
00:28:20.184 Port ID: 0 (0x0000)
00:28:20.184 Controller ID: 65535 (0xffff)
00:28:20.184 Admin Max SQ Size: [2024-12-13 19:22:54.165216] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:28:20.184 [2024-12-13 19:22:54.165226] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 34293 doesn't match qid 00:28:20.184 [2024-12-13 19:22:54.165240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32655 cdw0:b81f8140 sqhd:9e00 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165246] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 34293 doesn't match qid 00:28:20.184 [2024-12-13 19:22:54.165254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32655 cdw0:b81f8140 sqhd:9e00 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165261] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 34293 doesn't match qid 00:28:20.184 [2024-12-13 19:22:54.165268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32655 cdw0:b81f8140 sqhd:9e00 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165275] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 34293 doesn't match qid 00:28:20.184 [2024-12-13 19:22:54.165282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32655 cdw0:b81f8140 sqhd:9e00 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165293] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165321] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165335] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165348] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165365] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165377]
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:28:20.184 [2024-12-13 19:22:54.165383] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:28:20.184 [2024-12-13 19:22:54.165389] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165397] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165422] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165434] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165444] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165473] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165485] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165493] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165518] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165530] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165539] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165566] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165578] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165587] 
nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165608] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165621] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165629] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165659] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165671] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165679] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165704] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165716] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165725] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165750] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165762] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165770] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165795] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165807] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165816] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.184 [2024-12-13 19:22:54.165842] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.184 [2024-12-13 19:22:54.165848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:28:20.184 [2024-12-13 19:22:54.165854] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165863] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.184 [2024-12-13 19:22:54.165870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.165886] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.165891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.165898] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.165906] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.165914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.165935] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.165940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.165946] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.165957] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.165964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.165982] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.165987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.165993] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166002] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166025] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166047] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166056] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166079] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166091] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166100] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166121] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166133] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166141] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166172] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166183] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166192] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166215] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166226] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166237] 
nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166262] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166274] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166283] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166313] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166325] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166333] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166358] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166370] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166378] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166405] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166417] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166425] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166452] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166464] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166472] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166501] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166514] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166523] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166546] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166557] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166566] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166593] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166604] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166613] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166640] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166651] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166660] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166684] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166696] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166705] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.185 [2024-12-13 19:22:54.166712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.185 [2024-12-13 19:22:54.166731] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.185 [2024-12-13 19:22:54.166737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:28:20.185 [2024-12-13 19:22:54.166743] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166751] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.166778] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.166783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.166791] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166800] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.166825] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.166830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.166836] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166845] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.166870] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.166875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.166881] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166890] 
nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.166915] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.166920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.166927] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166935] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.166966] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.166971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.166977] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166986] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.166993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167013] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167024] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167033] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167063] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167077] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167085] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167108] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167120] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167128] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167160] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167171] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167180] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167208] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167220] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167228] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167257] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167269] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167278] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167303] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167314] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167323] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167350] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167363] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167372] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167397] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167408] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167417] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167442] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167453] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167462] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167491] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167502] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167511] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167537] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167549] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167558] 
nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167583] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.186 [2024-12-13 19:22:54.167588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:20.186 [2024-12-13 19:22:54.167594] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167603] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.186 [2024-12-13 19:22:54.167610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.186 [2024-12-13 19:22:54.167634] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.167640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.167646] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167654] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.167679] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.167685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.167691] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167699] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.167723] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.167728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.167734] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167743] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.167769] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.167775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.167781] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167790] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.167818] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.167824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.167830] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167839] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.167866] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.167871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.167877] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167886] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.167914] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.167920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.167926] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167934] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.167957] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.167963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.167969] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167978] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.167985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.168001] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.168006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.168012] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.168021] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.168028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.172045] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.172052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.172059] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.172068] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.172075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.187 [2024-12-13 19:22:54.172099] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.187 [2024-12-13 19:22:54.172104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000b p:0 m:0 dnr:0 00:28:20.187 [2024-12-13 19:22:54.172111] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.172117] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:28:20.187 128 00:28:20.187 Transport Service Identifier: 4420 00:28:20.187 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:20.187 Transport Address: 192.168.100.8 00:28:20.187 Transport Specific Address Subtype - RDMA 00:28:20.187 RDMA QP Service Type: 1 (Reliable Connected) 00:28:20.187 RDMA Provider Type: 1 (No provider specified) 00:28:20.187 RDMA CM Service: 1 (RDMA_CM) 00:28:20.187 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:20.187 [2024-12-13 19:22:54.242945] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:20.187 [2024-12-13 19:22:54.242994] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429196 ] 00:28:20.187 [2024-12-13 19:22:54.303252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:28:20.187 [2024-12-13 19:22:54.303308] nvme_rdma.c:2017:nvme_rdma_ctrlr_create_qpair: *DEBUG*: rqpair 0x2000003d7040, append_copy diabled 00:28:20.187 [2024-12-13 19:22:54.303326] nvme_rdma.c:2460:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:28:20.187 [2024-12-13 19:22:54.303341] nvme_rdma.c:1238:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:28:20.187 [2024-12-13 19:22:54.303345] nvme_rdma.c:1242:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:28:20.187 [2024-12-13 19:22:54.303370] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:28:20.187 [2024-12-13 19:22:54.313616] nvme_rdma.c: 459:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:28:20.187 [2024-12-13 19:22:54.323676] nvme_rdma.c:1124:nvme_rdma_connect_established: *DEBUG*: rc =0 00:28:20.187 [2024-12-13 19:22:54.323686] nvme_rdma.c:1129:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:28:20.187 [2024-12-13 19:22:54.323692] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323699] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323705] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323711] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323717] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323723] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323729] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323735] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323742] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323748] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323754] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323760] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323766] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323772] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 
0x181d00 00:28:20.187 [2024-12-13 19:22:54.323778] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323784] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323790] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:28:20.187 [2024-12-13 19:22:54.323796] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323804] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323810] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323816] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323822] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323828] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323834] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323840] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323847] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323853] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323859] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323865] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323871] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323877] nvme_rdma.c: 912:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323882] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:28:20.188 [2024-12-13 19:22:54.323887] nvme_rdma.c:1146:nvme_rdma_connect_established: *DEBUG*: rc =0 00:28:20.188 [2024-12-13 19:22:54.323892] nvme_rdma.c:1151:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:28:20.188 [2024-12-13 19:22:54.323906] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.323918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x181d00 00:28:20.188 [2024-12-13 19:22:54.329046] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.188 [2024-12-13 19:22:54.329055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:20.188 [2024-12-13 19:22:54.329062] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 
0x181d00 00:28:20.188 [2024-12-13 19:22:54.329069] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:20.188 [2024-12-13 19:22:54.329076] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:28:20.188 [2024-12-13 19:22:54.329082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:28:20.188 [2024-12-13 19:22:54.329097] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.188 [2024-12-13 19:22:54.329133] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.188 [2024-12-13 19:22:54.329139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:28:20.188 [2024-12-13 19:22:54.329147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:28:20.188 [2024-12-13 19:22:54.329153] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329160] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:28:20.188 [2024-12-13 19:22:54.329170] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.188 [2024-12-13 19:22:54.329193] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.188 [2024-12-13 19:22:54.329199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:28:20.188 [2024-12-13 19:22:54.329206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:28:20.188 [2024-12-13 19:22:54.329211] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:20.188 [2024-12-13 19:22:54.329226] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.188 [2024-12-13 19:22:54.329253] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.188 [2024-12-13 19:22:54.329259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:20.188 [2024-12-13 19:22:54.329265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:20.188 [2024-12-13 19:22:54.329271] 
nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329280] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.188 [2024-12-13 19:22:54.329305] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.188 [2024-12-13 19:22:54.329311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:20.188 [2024-12-13 19:22:54.329317] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:20.188 [2024-12-13 19:22:54.329323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:20.188 [2024-12-13 19:22:54.329329] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:20.188 [2024-12-13 19:22:54.329444] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:28:20.188 [2024-12-13 19:22:54.329450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:20.188 [2024-12-13 19:22:54.329458] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.188 [2024-12-13 19:22:54.329485] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.188 [2024-12-13 19:22:54.329490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:20.188 [2024-12-13 19:22:54.329500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:20.188 [2024-12-13 19:22:54.329506] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329514] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.188 [2024-12-13 19:22:54.329542] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.188 [2024-12-13 19:22:54.329548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:20.188 [2024-12-13 19:22:54.329554] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:20.188 [2024-12-13 
19:22:54.329560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:20.188 [2024-12-13 19:22:54.329566] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:28:20.188 [2024-12-13 19:22:54.329584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:20.188 [2024-12-13 19:22:54.329593] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00 00:28:20.188 [2024-12-13 19:22:54.329646] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.188 [2024-12-13 19:22:54.329651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:20.188 [2024-12-13 19:22:54.329660] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:28:20.188 [2024-12-13 19:22:54.329665] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:28:20.188 [2024-12-13 19:22:54.329671] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:28:20.188 [2024-12-13 19:22:54.329676] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:28:20.188 [2024-12-13 19:22:54.329682] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:28:20.188 [2024-12-13 19:22:54.329688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:28:20.188 [2024-12-13 19:22:54.329693] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:20.188 [2024-12-13 19:22:54.329708] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.188 [2024-12-13 19:22:54.329716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.188 [2024-12-13 19:22:54.329742] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.329747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.329755] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.329764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:20.189 [2024-12-13 19:22:54.329771] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.329778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.189 [2024-12-13 19:22:54.329785] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.329792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.189 [2024-12-13 19:22:54.329799] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.329806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.189 [2024-12-13 19:22:54.329812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.329817] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.329827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.329835] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.329843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.189 [2024-12-13 19:22:54.329859] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.329864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.329871] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:28:20.189 [2024-12-13 19:22:54.329876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.329882] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.329891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.329899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.329906] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.329914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.189 [2024-12-13 19:22:54.329933] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 
19:22:54.329939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.329988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.329994] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330012] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ca000 len:0x1000 key:0x181d00 00:28:20.189 [2024-12-13 19:22:54.330045] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.330051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.330063] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:28:20.189 [2024-12-13 19:22:54.330072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330078] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330094] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00 00:28:20.189 [2024-12-13 19:22:54.330137] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.330142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.330156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330162] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330177] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00 
00:28:20.189 [2024-12-13 19:22:54.330214] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.330220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.330229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330235] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330279] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:28:20.189 [2024-12-13 19:22:54.330284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:28:20.189 [2024-12-13 19:22:54.330291] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:28:20.189 [2024-12-13 19:22:54.330305] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.189 [2024-12-13 19:22:54.330321] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.189 [2024-12-13 19:22:54.330338] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.330344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.330350] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330356] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.330361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.330368] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 
0x181d00 00:28:20.189 [2024-12-13 19:22:54.330377] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.189 [2024-12-13 19:22:54.330408] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.330413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.330419] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330429] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.189 [2024-12-13 19:22:54.330453] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.330458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.330464] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330474] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.189 [2024-12-13 19:22:54.330501] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.189 [2024-12-13 19:22:54.330507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:28:20.189 [2024-12-13 19:22:54.330513] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330526] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x2000 key:0x181d00 00:28:20.189 [2024-12-13 19:22:54.330544] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:28:20.189 [2024-12-13 19:22:54.330551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x200 key:0x181d00 00:28:20.190 [2024-12-13 19:22:54.330559] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x181d00 00:28:20.190 [2024-12-13 19:22:54.330567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x200 key:0x181d00 
00:28:20.190 [2024-12-13 19:22:54.330575] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x181d00 00:28:20.190 [2024-12-13 19:22:54.330582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c5000 len:0x1000 key:0x181d00 00:28:20.190 [2024-12-13 19:22:54.330591] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.190 [2024-12-13 19:22:54.330596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:20.190 [2024-12-13 19:22:54.330609] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:28:20.190 [2024-12-13 19:22:54.330616] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.190 [2024-12-13 19:22:54.330621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:20.190 [2024-12-13 19:22:54.330631] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:28:20.190 [2024-12-13 19:22:54.330638] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.190 [2024-12-13 19:22:54.330643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:20.190 [2024-12-13 19:22:54.330650] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:28:20.190 [2024-12-13 19:22:54.330656] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.190 [2024-12-13 19:22:54.330661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:20.190 [2024-12-13 19:22:54.330670] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:28:20.190 ===================================================== 00:28:20.190 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.190 ===================================================== 00:28:20.190 Controller Capabilities/Features 00:28:20.190 ================================ 00:28:20.190 Vendor ID: 8086 00:28:20.190 Subsystem Vendor ID: 8086 00:28:20.190 Serial Number: SPDK00000000000001 00:28:20.190 Model Number: SPDK bdev Controller 00:28:20.190 Firmware Version: 25.01 00:28:20.190 Recommended Arb Burst: 6 00:28:20.190 IEEE OUI Identifier: e4 d2 5c 00:28:20.190 Multi-path I/O 00:28:20.190 May have multiple subsystem ports: Yes 00:28:20.190 May have multiple controllers: Yes 00:28:20.190 Associated with SR-IOV VF: No 00:28:20.190 Max Data Transfer Size: 131072 00:28:20.190 Max Number of Namespaces: 32 00:28:20.190 Max Number of I/O Queues: 127 00:28:20.190 NVMe Specification Version (VS): 1.3 00:28:20.190 NVMe Specification Version (Identify): 1.3 00:28:20.190 Maximum Queue Entries: 128 00:28:20.190 Contiguous Queues Required: Yes 00:28:20.190 Arbitration Mechanisms Supported 00:28:20.190 Weighted Round Robin: Not Supported 00:28:20.190 Vendor Specific: Not Supported 00:28:20.190 Reset Timeout: 15000 ms 00:28:20.190 Doorbell Stride: 4 bytes 00:28:20.190 NVM Subsystem Reset: Not Supported 00:28:20.190 Command Sets Supported 00:28:20.190 NVM Command Set: Supported 00:28:20.190 Boot Partition: Not 
Supported 00:28:20.190 Memory Page Size Minimum: 4096 bytes 00:28:20.190 Memory Page Size Maximum: 4096 bytes 00:28:20.190 Persistent Memory Region: Not Supported 00:28:20.190 Optional Asynchronous Events Supported 00:28:20.190 Namespace Attribute Notices: Supported 00:28:20.190 Firmware Activation Notices: Not Supported 00:28:20.190 ANA Change Notices: Not Supported 00:28:20.190 PLE Aggregate Log Change Notices: Not Supported 00:28:20.190 LBA Status Info Alert Notices: Not Supported 00:28:20.190 EGE Aggregate Log Change Notices: Not Supported 00:28:20.190 Normal NVM Subsystem Shutdown event: Not Supported 00:28:20.190 Zone Descriptor Change Notices: Not Supported 00:28:20.190 Discovery Log Change Notices: Not Supported 00:28:20.190 Controller Attributes 00:28:20.190 128-bit Host Identifier: Supported 00:28:20.190 Non-Operational Permissive Mode: Not Supported 00:28:20.190 NVM Sets: Not Supported 00:28:20.190 Read Recovery Levels: Not Supported 00:28:20.190 Endurance Groups: Not Supported 00:28:20.190 Predictable Latency Mode: Not Supported 00:28:20.190 Traffic Based Keep ALive: Not Supported 00:28:20.190 Namespace Granularity: Not Supported 00:28:20.190 SQ Associations: Not Supported 00:28:20.190 UUID List: Not Supported 00:28:20.190 Multi-Domain Subsystem: Not Supported 00:28:20.190 Fixed Capacity Management: Not Supported 00:28:20.190 Variable Capacity Management: Not Supported 00:28:20.190 Delete Endurance Group: Not Supported 00:28:20.190 Delete NVM Set: Not Supported 00:28:20.190 Extended LBA Formats Supported: Not Supported 00:28:20.190 Flexible Data Placement Supported: Not Supported 00:28:20.190 00:28:20.190 Controller Memory Buffer Support 00:28:20.190 ================================ 00:28:20.190 Supported: No 00:28:20.190 00:28:20.190 Persistent Memory Region Support 00:28:20.190 ================================ 00:28:20.190 Supported: No 00:28:20.190 00:28:20.190 Admin Command Set Attributes 00:28:20.190 ============================ 00:28:20.190 Security Send/Receive: Not Supported 00:28:20.190 Format NVM: Not Supported 00:28:20.190 Firmware Activate/Download: Not Supported 00:28:20.190 Namespace Management: Not Supported 00:28:20.190 Device Self-Test: Not Supported 00:28:20.190 Directives: Not Supported 00:28:20.190 NVMe-MI: Not Supported 00:28:20.190 Virtualization Management: Not Supported 00:28:20.190 Doorbell Buffer Config: Not Supported 00:28:20.190 Get LBA Status Capability: Not Supported 00:28:20.190 Command & Feature Lockdown Capability: Not Supported 00:28:20.190 Abort Command Limit: 4 00:28:20.190 Async Event Request Limit: 4 00:28:20.190 Number of Firmware Slots: N/A 00:28:20.190 Firmware Slot 1 Read-Only: N/A 00:28:20.190 Firmware Activation Without Reset: N/A 00:28:20.190 Multiple Update Detection Support: N/A 00:28:20.190 Firmware Update Granularity: No Information Provided 00:28:20.190 Per-Namespace SMART Log: No 00:28:20.190 Asymmetric Namespace Access Log Page: Not Supported 00:28:20.190 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:20.190 Command Effects Log Page: Supported 00:28:20.190 Get Log Page Extended Data: Supported 00:28:20.190 Telemetry Log Pages: Not Supported 00:28:20.190 Persistent Event Log Pages: Not Supported 00:28:20.190 Supported Log Pages Log Page: May Support 00:28:20.190 Commands Supported & Effects Log Page: Not Supported 00:28:20.190 Feature Identifiers & Effects Log Page:May Support 00:28:20.190 NVMe-MI Commands & Effects Log Page: May Support 00:28:20.190 Data Area 4 for Telemetry Log: Not Supported 00:28:20.190 Error Log Page 
Entries Supported: 128 00:28:20.190 Keep Alive: Supported 00:28:20.190 Keep Alive Granularity: 10000 ms 00:28:20.190 00:28:20.190 NVM Command Set Attributes 00:28:20.190 ========================== 00:28:20.190 Submission Queue Entry Size 00:28:20.190 Max: 64 00:28:20.190 Min: 64 00:28:20.190 Completion Queue Entry Size 00:28:20.190 Max: 16 00:28:20.190 Min: 16 00:28:20.190 Number of Namespaces: 32 00:28:20.190 Compare Command: Supported 00:28:20.190 Write Uncorrectable Command: Not Supported 00:28:20.190 Dataset Management Command: Supported 00:28:20.190 Write Zeroes Command: Supported 00:28:20.190 Set Features Save Field: Not Supported 00:28:20.190 Reservations: Supported 00:28:20.190 Timestamp: Not Supported 00:28:20.190 Copy: Supported 00:28:20.190 Volatile Write Cache: Present 00:28:20.190 Atomic Write Unit (Normal): 1 00:28:20.190 Atomic Write Unit (PFail): 1 00:28:20.190 Atomic Compare & Write Unit: 1 00:28:20.190 Fused Compare & Write: Supported 00:28:20.190 Scatter-Gather List 00:28:20.190 SGL Command Set: Supported 00:28:20.190 SGL Keyed: Supported 00:28:20.190 SGL Bit Bucket Descriptor: Not Supported 00:28:20.190 SGL Metadata Pointer: Not Supported 00:28:20.190 Oversized SGL: Not Supported 00:28:20.190 SGL Metadata Address: Not Supported 00:28:20.190 SGL Offset: Supported 00:28:20.190 Transport SGL Data Block: Not Supported 00:28:20.190 Replay Protected Memory Block: Not Supported 00:28:20.190 00:28:20.190 Firmware Slot Information 00:28:20.190 ========================= 00:28:20.190 Active slot: 1 00:28:20.190 Slot 1 Firmware Revision: 25.01 00:28:20.190 00:28:20.190 00:28:20.190 Commands Supported and Effects 00:28:20.190 ============================== 00:28:20.190 Admin Commands 00:28:20.190 -------------- 00:28:20.190 Get Log Page (02h): Supported 00:28:20.190 Identify (06h): Supported 00:28:20.190 Abort (08h): Supported 00:28:20.190 Set Features (09h): Supported 00:28:20.190 Get Features (0Ah): Supported 00:28:20.190 Asynchronous Event Request (0Ch): Supported 00:28:20.190 Keep Alive (18h): Supported 00:28:20.190 I/O Commands 00:28:20.190 ------------ 00:28:20.190 Flush (00h): Supported LBA-Change 00:28:20.190 Write (01h): Supported LBA-Change 00:28:20.190 Read (02h): Supported 00:28:20.190 Compare (05h): Supported 00:28:20.190 Write Zeroes (08h): Supported LBA-Change 00:28:20.190 Dataset Management (09h): Supported LBA-Change 00:28:20.190 Copy (19h): Supported LBA-Change 00:28:20.190 00:28:20.190 Error Log 00:28:20.190 ========= 00:28:20.190 00:28:20.190 Arbitration 00:28:20.190 =========== 00:28:20.190 Arbitration Burst: 1 00:28:20.190 00:28:20.191 Power Management 00:28:20.191 ================ 00:28:20.191 Number of Power States: 1 00:28:20.191 Current Power State: Power State #0 00:28:20.191 Power State #0: 00:28:20.191 Max Power: 0.00 W 00:28:20.191 Non-Operational State: Operational 00:28:20.191 Entry Latency: Not Reported 00:28:20.191 Exit Latency: Not Reported 00:28:20.191 Relative Read Throughput: 0 00:28:20.191 Relative Read Latency: 0 00:28:20.191 Relative Write Throughput: 0 00:28:20.191 Relative Write Latency: 0 00:28:20.191 Idle Power: Not Reported 00:28:20.191 Active Power: Not Reported 00:28:20.191 Non-Operational Permissive Mode: Not Supported 00:28:20.191 00:28:20.191 Health Information 00:28:20.191 ================== 00:28:20.191 Critical Warnings: 00:28:20.191 Available Spare Space: OK 00:28:20.191 Temperature: OK 00:28:20.191 Device Reliability: OK 00:28:20.191 Read Only: No 00:28:20.191 Volatile Memory Backup: OK 00:28:20.191 Current Temperature: 0 
Kelvin (-273 Celsius) 00:28:20.191 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:20.191 Available Spare: 0% 00:28:20.191 Available Spare Threshold: 0% 00:28:20.191 Life Percentage [2024-12-13 19:22:54.330746] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.330755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.330778] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.330783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.330789] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.330819] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:28:20.191 [2024-12-13 19:22:54.330828] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 62915 doesn't match qid 00:28:20.191 [2024-12-13 19:22:54.330842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32752 cdw0:1c7d6d0 sqhd:5e00 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.330849] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 62915 doesn't match qid 00:28:20.191 [2024-12-13 19:22:54.330857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32752 cdw0:1c7d6d0 sqhd:5e00 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.330864] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 62915 doesn't match qid 00:28:20.191 [2024-12-13 19:22:54.330871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32752 cdw0:1c7d6d0 sqhd:5e00 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.330877] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 62915 doesn't match qid 00:28:20.191 [2024-12-13 19:22:54.330885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32752 cdw0:1c7d6d0 sqhd:5e00 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.330894] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.330902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.330916] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.330922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.330930] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.330937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.330944] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.330963] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:28:20.191 [2024-12-13 19:22:54.330969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.330975] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:28:20.191 [2024-12-13 19:22:54.330981] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:28:20.191 [2024-12-13 19:22:54.330987] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.330996] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331025] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331037] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331050] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331076] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331088] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331096] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331120] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331132] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331140] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331168] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331180] 
nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331189] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331215] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331227] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331236] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331263] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331275] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331284] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331316] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331327] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331336] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331363] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331375] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331384] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331412] 
nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331424] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331433] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331456] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:28:20.191 [2024-12-13 19:22:54.331468] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331476] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.191 [2024-12-13 19:22:54.331484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.191 [2024-12-13 19:22:54.331503] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.191 [2024-12-13 19:22:54.331509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331515] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331524] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331547] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331559] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331567] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331597] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331608] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331617] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331644] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331656] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331664] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331695] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331706] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331715] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331746] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331757] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331766] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331791] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331803] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331811] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331842] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 
19:22:54.331854] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331863] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331890] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331901] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331910] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331939] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.331950] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331960] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.331968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.331989] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.331995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.332001] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332010] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.332037] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.332047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.332054] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332062] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.332085] 
nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.332091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.332097] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332106] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.332129] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.332134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.332141] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332149] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.332177] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.332182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.332189] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332197] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.332226] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.332231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.332239] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332248] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.192 [2024-12-13 19:22:54.332277] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.192 [2024-12-13 19:22:54.332282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:28:20.192 [2024-12-13 19:22:54.332288] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:28:20.192 [2024-12-13 19:22:54.332297] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332327] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332339] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332348] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332371] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332383] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332391] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332418] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332430] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332438] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332471] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332483] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332491] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332524] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 
19:22:54.332536] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332545] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332574] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332586] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332594] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332619] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332631] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332640] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332667] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332678] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332687] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332714] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332725] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332734] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332761] 
nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332773] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332781] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332803] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332816] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332824] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332855] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332867] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332876] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332901] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332912] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332921] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332948] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.332959] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332968] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.332976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.332990] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.332995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.333001] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.333010] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.333017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.333039] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.337050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.337065] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.337074] nvme_rdma.c:2515:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.337082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:20.193 [2024-12-13 19:22:54.337100] nvme_rdma.c:2793:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:20.193 [2024-12-13 19:22:54.337107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:28:20.193 [2024-12-13 19:22:54.337113] nvme_rdma.c:2686:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:28:20.193 [2024-12-13 19:22:54.337120] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:28:20.193 Used: 0% 00:28:20.193 Data Units Read: 0 00:28:20.193 Data Units Written: 0 00:28:20.193 Host Read Commands: 0 00:28:20.193 Host Write Commands: 0 00:28:20.193 Controller Busy Time: 0 minutes 00:28:20.193 Power Cycles: 0 00:28:20.193 Power On Hours: 0 hours 00:28:20.193 Unsafe Shutdowns: 0 00:28:20.193 Unrecoverable Media Errors: 0 00:28:20.193 Lifetime Error Log Entries: 0 00:28:20.193 Warning Temperature Time: 0 minutes 00:28:20.193 Critical Temperature Time: 0 minutes 00:28:20.194 00:28:20.194 Number of Queues 00:28:20.194 ================ 00:28:20.194 Number of I/O Submission Queues: 127 00:28:20.194 Number of I/O Completion Queues: 127 00:28:20.194 00:28:20.194 Active Namespaces 00:28:20.194 ================= 00:28:20.194 Namespace ID:1 00:28:20.194 Error Recovery Timeout: Unlimited 00:28:20.194 Command Set Identifier: NVM (00h) 00:28:20.194 Deallocate: Supported 00:28:20.194 Deallocated/Unwritten Error: Not Supported 00:28:20.194 Deallocated Read Value: Unknown 00:28:20.194 Deallocate in Write Zeroes: Not Supported 00:28:20.194 Deallocated Guard Field: 0xFFFF 00:28:20.194 Flush: Supported 00:28:20.194 Reservation: Supported 00:28:20.194 Namespace Sharing Capabilities: Multiple Controllers 
00:28:20.194 Size (in LBAs): 131072 (0GiB) 00:28:20.194 Capacity (in LBAs): 131072 (0GiB) 00:28:20.194 Utilization (in LBAs): 131072 (0GiB) 00:28:20.194 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:20.194 EUI64: ABCDEF0123456789 00:28:20.194 UUID: f677af1b-5f43-42fd-b03b-242aa1f34542 00:28:20.194 Thin Provisioning: Not Supported 00:28:20.194 Per-NS Atomic Units: Yes 00:28:20.194 Atomic Boundary Size (Normal): 0 00:28:20.194 Atomic Boundary Size (PFail): 0 00:28:20.194 Atomic Boundary Offset: 0 00:28:20.194 Maximum Single Source Range Length: 65535 00:28:20.194 Maximum Copy Length: 65535 00:28:20.194 Maximum Source Range Count: 1 00:28:20.194 NGUID/EUI64 Never Reused: No 00:28:20.194 Namespace Write Protected: No 00:28:20.194 Number of LBA Formats: 1 00:28:20.194 Current LBA Format: LBA Format #00 00:28:20.194 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:20.194 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:20.194 rmmod nvme_rdma 00:28:20.194 rmmod nvme_fabrics 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 429005 ']' 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 429005 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 429005 ']' 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 429005 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429005 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429005' 00:28:20.194 killing process with pid 429005 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 429005 00:28:20.194 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 429005 00:28:20.453 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.453 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:20.453 00:28:20.453 real 0m8.730s 00:28:20.453 user 0m6.405s 00:28:20.453 sys 0m6.011s 00:28:20.453 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.453 19:22:54 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:20.453 ************************************ 00:28:20.453 END TEST nvmf_identify 00:28:20.453 ************************************ 00:28:20.453 19:22:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:28:20.453 19:22:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:20.453 19:22:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.453 19:22:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.713 ************************************ 00:28:20.713 START TEST nvmf_perf 00:28:20.713 ************************************ 00:28:20.713 19:22:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:28:20.713 * Looking for test storage... 
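The long run of FABRIC PROPERTY GET completions above is the host-side shutdown path: nvme_ctrlr_shutdown_poll_async repeatedly reads CSTS over the fabrics property-get command until the controller reports shutdown complete, which this log says took 6 milliseconds. The identify test then tears itself down: the subsystem is removed over RPC, nvme-rdma and nvme-fabrics are unloaded, and the target process (pid 429005) is reaped. Below is a minimal sketch of the killprocess pattern visible in the xtrace; the helper name and structure are reconstructed from the trace, not copied from the real autotest_common.sh.

    #!/usr/bin/env bash
    # Sketch of the teardown helper traced above (assumed shape, hypothetical name).
    killprocess_sketch() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1                  # no pid given
        kill -0 "$pid" 2>/dev/null || return 0     # process already gone
        if [ "$(uname)" = Linux ]; then
            # resolve the command name so a reused pid is never signalled blindly
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1     # refuse to signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        # wait works here because the target was started by this same shell
        wait "$pid" || true
    }

In the trace above the resolved process name is reactor_0, the SPDK main reactor thread, so the helper proceeds to kill and wait.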
00:28:20.713 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:20.713 19:22:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:20.713 19:22:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:20.713 19:22:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:20.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.713 --rc genhtml_branch_coverage=1 00:28:20.713 --rc genhtml_function_coverage=1 00:28:20.713 --rc genhtml_legend=1 00:28:20.713 --rc geninfo_all_blocks=1 00:28:20.713 --rc geninfo_unexecuted_blocks=1 00:28:20.713 00:28:20.713 ' 00:28:20.713 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:20.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.713 --rc genhtml_branch_coverage=1 00:28:20.713 --rc genhtml_function_coverage=1 00:28:20.714 --rc genhtml_legend=1 00:28:20.714 --rc geninfo_all_blocks=1 00:28:20.714 --rc geninfo_unexecuted_blocks=1 00:28:20.714 00:28:20.714 ' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.714 --rc genhtml_branch_coverage=1 00:28:20.714 --rc genhtml_function_coverage=1 00:28:20.714 --rc genhtml_legend=1 00:28:20.714 --rc geninfo_all_blocks=1 00:28:20.714 --rc geninfo_unexecuted_blocks=1 00:28:20.714 00:28:20.714 ' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:20.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.714 --rc genhtml_branch_coverage=1 00:28:20.714 --rc genhtml_function_coverage=1 00:28:20.714 --rc genhtml_legend=1 00:28:20.714 --rc geninfo_all_blocks=1 00:28:20.714 --rc geninfo_unexecuted_blocks=1 00:28:20.714 00:28:20.714 ' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.714 19:22:55 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.714 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.714 19:22:55 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.714 19:22:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.836 19:23:02 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:28.836 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:28.836 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:28.836 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:28.837 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
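The loop traced at nvmf/common.sh@410-@429 above maps each supported Mellanox PCI function to its kernel net interface by globbing sysfs. A minimal sketch of that lookup follows, using the two PCI addresses this log reports (0000:d9:00.0 and 0000:d9:00.1) as an assumed example; the loop body mirrors the traced array manipulation rather than quoting the real script.

    #!/usr/bin/env bash
    # Sketch: discover the netdev name(s) behind each RDMA NIC's PCI address.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # every net interface bound to this PCI function appears as a directory here
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue     # no netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")     # strip paths, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

On this host the sketch would report mlx_0_0 and mlx_0_1, matching the "Found net devices under 0000:d9:00.x" lines above.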
00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:28.837 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:28.837 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:28.837 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:28.837 altname enp217s0f0np0 00:28:28.837 altname ens818f0np0 00:28:28.837 inet 192.168.100.8/24 scope global mlx_0_0 00:28:28.837 valid_lft forever preferred_lft forever 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:28.837 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:28.837 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:28.837 altname enp217s0f1np1 00:28:28.837 altname ens818f1np1 00:28:28.837 inet 192.168.100.9/24 scope global mlx_0_1 00:28:28.837 valid_lft forever preferred_lft forever 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:28:28.837 192.168.100.9' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:28.837 192.168.100.9' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:28.837 192.168.100.9' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:28.837 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=432620 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 432620 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 432620 ']' 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.838 [2024-12-13 19:23:02.359274] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:28.838 [2024-12-13 19:23:02.359341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.838 [2024-12-13 19:23:02.450661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.838 [2024-12-13 19:23:02.473747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.838 [2024-12-13 19:23:02.473779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.838 [2024-12-13 19:23:02.473792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.838 [2024-12-13 19:23:02.473800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.838 [2024-12-13 19:23:02.473808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.838 [2024-12-13 19:23:02.475265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.838 [2024-12-13 19:23:02.475294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.838 [2024-12-13 19:23:02.475379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.838 [2024-12-13 19:23:02.475380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:28.838 19:23:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:31.373 19:23:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:31.373 19:23:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:31.632 19:23:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:28:31.632 19:23:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:31.891 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:31.891 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:28:31.891 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:31.891 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:28:31.891 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:28:32.150 [2024-12-13 19:23:06.269010] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:28:32.150 [2024-12-13 19:23:06.290057] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x220d400/0x20e4fc0) succeed. 00:28:32.150 [2024-12-13 19:23:06.299455] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x220faa0/0x2126660) succeed. 00:28:32.150 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.409 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:32.409 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:32.669 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:32.669 19:23:06 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:32.669 19:23:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:32.928 [2024-12-13 19:23:07.194353] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:32.928 19:23:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:33.188 19:23:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:28:33.188 19:23:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:28:33.188 19:23:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:33.188 19:23:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:28:34.567 Initializing NVMe Controllers 00:28:34.567 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:28:34.567 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:28:34.567 Initialization complete. Launching workers. 
00:28:34.567 ======================================================== 00:28:34.567 Latency(us) 00:28:34.567 Device Information : IOPS MiB/s Average min max 00:28:34.567 PCIE (0000:d8:00.0) NSID 1 from core 0: 101203.83 395.33 315.82 33.93 4391.41 00:28:34.567 ======================================================== 00:28:34.567 Total : 101203.83 395.33 315.82 33.93 4391.41 00:28:34.567 00:28:34.567 19:23:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:37.855 Initializing NVMe Controllers 00:28:37.855 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.855 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.855 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:37.855 Initialization complete. Launching workers. 00:28:37.855 ======================================================== 00:28:37.855 Latency(us) 00:28:37.855 Device Information : IOPS MiB/s Average min max 00:28:37.855 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6619.51 25.86 150.25 49.00 4093.27 00:28:37.855 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5170.65 20.20 192.42 70.49 4107.62 00:28:37.855 ======================================================== 00:28:37.855 Total : 11790.17 46.06 168.74 49.00 4107.62 00:28:37.855 00:28:37.855 19:23:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:41.144 Initializing NVMe Controllers 00:28:41.144 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.144 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.144 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:41.144 Initialization complete. Launching workers. 00:28:41.144 ======================================================== 00:28:41.144 Latency(us) 00:28:41.144 Device Information : IOPS MiB/s Average min max 00:28:41.144 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18358.99 71.71 1741.34 500.57 5442.03 00:28:41.144 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4057.15 15.85 7942.26 5914.97 9060.91 00:28:41.144 ======================================================== 00:28:41.144 Total : 22416.14 87.56 2863.65 500.57 9060.91 00:28:41.144 00:28:41.144 19:23:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:28:41.144 19:23:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:46.418 Initializing NVMe Controllers 00:28:46.418 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.418 Controller IO queue size 128, less than required. 00:28:46.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:46.418 Controller IO queue size 128, less than required. 00:28:46.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:46.418 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:46.418 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:46.418 Initialization complete. Launching workers. 00:28:46.418 ======================================================== 00:28:46.418 Latency(us) 00:28:46.418 Device Information : IOPS MiB/s Average min max 00:28:46.418 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3997.28 999.32 32069.44 12843.64 90664.95 00:28:46.418 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.78 1008.19 31356.13 13383.32 77938.79 00:28:46.418 ======================================================== 00:28:46.418 Total : 8030.05 2007.51 31711.21 12843.64 90664.95 00:28:46.418 00:28:46.418 19:23:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:28:46.418 No valid NVMe controllers or AIO or URING devices found 00:28:46.418 Initializing NVMe Controllers 00:28:46.418 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.418 Controller IO queue size 128, less than required. 00:28:46.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:46.418 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:46.418 Controller IO queue size 128, less than required. 00:28:46.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:46.418 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:46.418 WARNING: Some requested NVMe devices were skipped 00:28:46.418 19:23:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:28:50.612 Initializing NVMe Controllers 00:28:50.612 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.612 Controller IO queue size 128, less than required. 00:28:50.612 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.612 Controller IO queue size 128, less than required. 00:28:50.612 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.612 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:50.612 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:50.612 Initialization complete. Launching workers. 
00:28:50.612 00:28:50.612 ==================== 00:28:50.612 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:50.612 RDMA transport: 00:28:50.612 dev name: mlx5_0 00:28:50.612 polls: 402390 00:28:50.612 idle_polls: 398775 00:28:50.612 completions: 44662 00:28:50.612 queued_requests: 1 00:28:50.612 total_send_wrs: 22331 00:28:50.612 send_doorbell_updates: 3383 00:28:50.612 total_recv_wrs: 22458 00:28:50.612 recv_doorbell_updates: 3384 00:28:50.612 --------------------------------- 00:28:50.612 00:28:50.612 ==================== 00:28:50.612 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:50.612 RDMA transport: 00:28:50.612 dev name: mlx5_0 00:28:50.612 polls: 404611 00:28:50.612 idle_polls: 404331 00:28:50.612 completions: 20270 00:28:50.612 queued_requests: 1 00:28:50.612 total_send_wrs: 10135 00:28:50.612 send_doorbell_updates: 253 00:28:50.612 total_recv_wrs: 10262 00:28:50.612 recv_doorbell_updates: 254 00:28:50.612 --------------------------------- 00:28:50.612 ======================================================== 00:28:50.612 Latency(us) 00:28:50.612 Device Information : IOPS MiB/s Average min max 00:28:50.612 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5582.50 1395.62 22976.19 11325.66 71475.89 00:28:50.612 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2533.50 633.38 50302.12 30604.01 80591.46 00:28:50.612 ======================================================== 00:28:50.612 Total : 8116.00 2029.00 31506.28 11325.66 80591.46 00:28:50.612 00:28:50.612 19:23:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:50.612 19:23:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.612 19:23:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:50.612 19:23:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:28:50.612 19:23:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:57.477 19:23:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=91b00df6-8f2b-4ea5-b63a-578b15b3bb2f 00:28:57.477 19:23:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 91b00df6-8f2b-4ea5-b63a-578b15b3bb2f 00:28:57.477 19:23:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=91b00df6-8f2b-4ea5-b63a-578b15b3bb2f 00:28:57.477 19:23:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:28:57.477 19:23:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:28:57.477 19:23:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:28:57.477 19:23:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:28:57.477 { 00:28:57.477 "uuid": "91b00df6-8f2b-4ea5-b63a-578b15b3bb2f", 00:28:57.477 "name": "lvs_0", 00:28:57.477 "base_bdev": "Nvme0n1", 00:28:57.477 "total_data_clusters": 476466, 00:28:57.477 "free_clusters": 476466, 00:28:57.477 "block_size": 512, 00:28:57.477 "cluster_size": 4194304 00:28:57.477 
} 00:28:57.477 ]' 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="91b00df6-8f2b-4ea5-b63a-578b15b3bb2f") .free_clusters' 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=476466 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="91b00df6-8f2b-4ea5-b63a-578b15b3bb2f") .cluster_size' 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1905864 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1905864 00:28:57.477 1905864 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91b00df6-8f2b-4ea5-b63a-578b15b3bb2f lbd_0 20480 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b670ee13-a75d-4537-8cbc-43dc7ff726b2 00:28:57.477 19:23:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore b670ee13-a75d-4537-8cbc-43dc7ff726b2 lvs_n_0 00:28:59.383 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=92e57ffc-1a6f-453f-bd0b-16ef61d27146 00:28:59.383 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 92e57ffc-1a6f-453f-bd0b-16ef61d27146 00:28:59.383 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=92e57ffc-1a6f-453f-bd0b-16ef61d27146 00:28:59.383 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:28:59.383 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:28:59.383 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:28:59.383 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:28:59.643 { 00:28:59.643 "uuid": "91b00df6-8f2b-4ea5-b63a-578b15b3bb2f", 00:28:59.643 "name": "lvs_0", 00:28:59.643 "base_bdev": "Nvme0n1", 00:28:59.643 "total_data_clusters": 476466, 00:28:59.643 "free_clusters": 471346, 00:28:59.643 "block_size": 512, 00:28:59.643 "cluster_size": 4194304 00:28:59.643 }, 00:28:59.643 { 00:28:59.643 "uuid": "92e57ffc-1a6f-453f-bd0b-16ef61d27146", 00:28:59.643 "name": "lvs_n_0", 00:28:59.643 "base_bdev": "b670ee13-a75d-4537-8cbc-43dc7ff726b2", 00:28:59.643 "total_data_clusters": 5114, 00:28:59.643 "free_clusters": 5114, 00:28:59.643 "block_size": 512, 00:28:59.643 "cluster_size": 4194304 00:28:59.643 } 00:28:59.643 ]' 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="92e57ffc-1a6f-453f-bd0b-16ef61d27146") .free_clusters' 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="92e57ffc-1a6f-453f-bd0b-16ef61d27146") .cluster_size' 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:28:59.643 20456 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:59.643 19:23:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 92e57ffc-1a6f-453f-bd0b-16ef61d27146 lbd_nest_0 20456 00:28:59.902 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=3cd78542-caea-4a67-91b5-26d49242a22d 00:28:59.902 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.161 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:00.161 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 3cd78542-caea-4a67-91b5-26d49242a22d 00:29:00.420 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:00.420 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:00.420 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:00.420 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:00.420 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:00.420 19:23:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:12.634 Initializing NVMe Controllers 00:29:12.634 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:12.634 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:12.634 Initialization complete. Launching workers. 
00:29:12.634 ======================================================== 00:29:12.634 Latency(us) 00:29:12.634 Device Information : IOPS MiB/s Average min max 00:29:12.634 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5732.10 2.80 173.81 70.48 8070.71 00:29:12.634 ======================================================== 00:29:12.634 Total : 5732.10 2.80 173.81 70.48 8070.71 00:29:12.634 00:29:12.634 19:23:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:12.634 19:23:46 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:24.843 Initializing NVMe Controllers 00:29:24.843 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.843 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:24.843 Initialization complete. Launching workers. 00:29:24.843 ======================================================== 00:29:24.843 Latency(us) 00:29:24.843 Device Information : IOPS MiB/s Average min max 00:29:24.843 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2640.80 330.10 378.29 157.36 8202.99 00:29:24.843 ======================================================== 00:29:24.843 Total : 2640.80 330.10 378.29 157.36 8202.99 00:29:24.843 00:29:24.843 19:23:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:24.843 19:23:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:24.844 19:23:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:34.825 Initializing NVMe Controllers 00:29:34.825 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.825 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.825 Initialization complete. Launching workers. 00:29:34.825 ======================================================== 00:29:34.825 Latency(us) 00:29:34.825 Device Information : IOPS MiB/s Average min max 00:29:34.825 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11225.50 5.48 2850.10 947.24 10160.67 00:29:34.825 ======================================================== 00:29:34.825 Total : 11225.50 5.48 2850.10 947.24 10160.67 00:29:34.825 00:29:34.825 19:24:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:34.825 19:24:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:47.036 Initializing NVMe Controllers 00:29:47.036 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.036 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:47.036 Initialization complete. Launching workers. 
00:29:47.036 ======================================================== 00:29:47.036 Latency(us) 00:29:47.036 Device Information : IOPS MiB/s Average min max 00:29:47.036 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4002.40 500.30 8000.47 4906.68 16018.47 00:29:47.036 ======================================================== 00:29:47.036 Total : 4002.40 500.30 8000.47 4906.68 16018.47 00:29:47.036 00:29:47.036 19:24:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:47.036 19:24:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:47.036 19:24:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:59.245 Initializing NVMe Controllers 00:29:59.245 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.245 Controller IO queue size 128, less than required. 00:29:59.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:59.245 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.245 Initialization complete. Launching workers. 00:29:59.245 ======================================================== 00:29:59.245 Latency(us) 00:29:59.245 Device Information : IOPS MiB/s Average min max 00:29:59.245 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18698.50 9.13 6847.57 1993.50 15402.49 00:29:59.245 ======================================================== 00:29:59.245 Total : 18698.50 9.13 6847.57 1993.50 15402.49 00:29:59.245 00:29:59.245 19:24:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:59.245 19:24:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:09.228 Initializing NVMe Controllers 00:30:09.228 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.228 Controller IO queue size 128, less than required. 00:30:09.228 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:09.228 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:09.228 Initialization complete. Launching workers. 
00:30:09.228 ======================================================== 00:30:09.228 Latency(us) 00:30:09.228 Device Information : IOPS MiB/s Average min max 00:30:09.228 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10984.40 1373.05 11652.36 3465.75 24860.32 00:30:09.228 ======================================================== 00:30:09.228 Total : 10984.40 1373.05 11652.36 3465.75 24860.32 00:30:09.228 00:30:09.228 19:24:42 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.228 19:24:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3cd78542-caea-4a67-91b5-26d49242a22d 00:30:09.487 19:24:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:09.747 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b670ee13-a75d-4537-8cbc-43dc7ff726b2 00:30:10.005 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:30:10.265 rmmod nvme_rdma 00:30:10.265 rmmod nvme_fabrics 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 432620 ']' 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 432620 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 432620 ']' 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 432620 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432620 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:10.265 19:24:44 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 432620' 00:30:10.265 killing process with pid 432620 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 432620 00:30:10.265 19:24:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 432620 00:30:12.799 19:24:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:12.799 19:24:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:30:12.799 00:30:12.799 real 1m52.305s 00:30:12.799 user 7m2.551s 00:30:12.799 sys 0m7.604s 00:30:12.799 19:24:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.799 19:24:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:12.799 ************************************ 00:30:12.799 END TEST nvmf_perf 00:30:12.799 ************************************ 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.059 ************************************ 00:30:13.059 START TEST nvmf_fio_host 00:30:13.059 ************************************ 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:30:13.059 * Looking for test storage... 
00:30:13.059 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:13.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.059 --rc genhtml_branch_coverage=1 00:30:13.059 --rc genhtml_function_coverage=1 00:30:13.059 --rc genhtml_legend=1 00:30:13.059 --rc geninfo_all_blocks=1 00:30:13.059 --rc geninfo_unexecuted_blocks=1 00:30:13.059 00:30:13.059 ' 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:13.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.059 --rc genhtml_branch_coverage=1 00:30:13.059 --rc genhtml_function_coverage=1 00:30:13.059 --rc genhtml_legend=1 00:30:13.059 --rc geninfo_all_blocks=1 00:30:13.059 --rc geninfo_unexecuted_blocks=1 00:30:13.059 00:30:13.059 ' 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:13.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.059 --rc genhtml_branch_coverage=1 00:30:13.059 --rc genhtml_function_coverage=1 00:30:13.059 --rc genhtml_legend=1 00:30:13.059 --rc geninfo_all_blocks=1 00:30:13.059 --rc geninfo_unexecuted_blocks=1 00:30:13.059 00:30:13.059 ' 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:13.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.059 --rc genhtml_branch_coverage=1 00:30:13.059 --rc genhtml_function_coverage=1 00:30:13.059 --rc genhtml_legend=1 00:30:13.059 --rc geninfo_all_blocks=1 00:30:13.059 --rc geninfo_unexecuted_blocks=1 00:30:13.059 00:30:13.059 ' 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.059 19:24:47 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.059 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:13.060 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.060 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:13.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:13.319
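The `[: : integer expression expected` complaint above is a plain bash artifact: common.sh line 33 runs `'[' '' -eq 1 ']'`, and `-eq` rejects the empty string where it expects an integer. A minimal sketch of the failure and one guarded alternative; the variable name is hypothetical, not the one common.sh actually tests:

    #!/usr/bin/env bash
    # Reproduce: an empty operand to -eq is not an integer, so [ prints
    # "[: : integer expression expected" and returns status 2.
    flag=""
    [ "$flag" -eq 1 ] && echo enabled

    # Guarded form: default the empty value to 0 so -eq always sees an integer.
    if [ "${flag:-0}" -eq 1 ]; then
        echo enabled
    fi

Here the failed test simply falls through and the run continues; a script running under `set -e` would want the guarded form.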
19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.319 19:24:47 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.443 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:21.444 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:21.444 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:21.444 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:21.444 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:30:21.444 
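The device scan above walks a vendor:device table and resolves each matching PCI function to its kernel net devices through sysfs, which produces the `Found net devices under 0000:d9:00.x: mlx_0_x` lines. A simplified sketch of that resolution, filtering only on Mellanox's vendor ID 0x15b3 rather than the full per-device-ID table common.sh builds:

    #!/usr/bin/env bash
    # List net devices backed by Mellanox PCI functions, via sysfs.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor" 2>/dev/null)" = "0x15b3" ] || continue
        for net in "$pci"/net/*; do
            [ -e "$net" ] || continue
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done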
19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:30:21.444 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:21.444 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:21.444 altname enp217s0f0np0 00:30:21.444 altname ens818f0np0 00:30:21.444 inet 192.168.100.8/24 scope global mlx_0_0 00:30:21.444 valid_lft forever preferred_lft forever 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:30:21.444 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:21.444 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:21.444 altname enp217s0f1np1 00:30:21.444 altname ens818f1np1 00:30:21.444 inet 192.168.100.9/24 scope global mlx_0_1 00:30:21.444 valid_lft forever preferred_lft forever 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:30:21.444 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:30:21.445 19:24:54 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:30:21.445 192.168.100.9' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:30:21.445 192.168.100.9' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:30:21.445 192.168.100.9' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=453891 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
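Address discovery above is two small text pipelines: each interface IP is scraped out of `ip -o -4 addr show` (field 4, with the prefix length cut off), and the first and second target IPs are then peeled off the newline-separated list with head/tail. A compact sketch of both steps, using the interface names from this run:

    #!/usr/bin/env bash
    # Field 4 of `ip -o -4 addr show DEV` is ADDR/PREFIX; strip the prefix.
    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

    rdma_ip_list="$(get_ip mlx_0_0)
    $(get_ip mlx_0_1)"

    first=$(echo "$rdma_ip_list" | head -n 1)               # 192.168.100.8
    second=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1) # 192.168.100.9
    echo "$first $second"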
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 453891 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 453891 ']' 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.445 [2024-12-13 19:24:54.759747] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:21.445 [2024-12-13 19:24:54.759811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.445 [2024-12-13 19:24:54.852472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:21.445 [2024-12-13 19:24:54.874795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.445 [2024-12-13 19:24:54.874832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.445 [2024-12-13 19:24:54.874842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.445 [2024-12-13 19:24:54.874850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.445 [2024-12-13 19:24:54.874856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.445 [2024-12-13 19:24:54.876650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.445 [2024-12-13 19:24:54.876763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.445 [2024-12-13 19:24:54.876885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.445 [2024-12-13 19:24:54.876887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:30:21.445 19:24:54 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:21.445 [2024-12-13 19:24:55.167629] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb02540/0xb069f0) succeed. 00:30:21.445 [2024-12-13 19:24:55.176917] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb03b80/0xb48090) succeed. 
00:30:21.445 19:24:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:21.445 19:24:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.445 19:24:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.445 19:24:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:21.445 Malloc1 00:30:21.445 19:24:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:21.445 19:24:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:21.704 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:21.964 [2024-12-13 19:24:56.165699] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:21.964 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:22.223 19:24:56 
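Target bring-up above is a short RPC sequence: create the RDMA transport, back a namespace with a malloc bdev, and expose it through a subsystem listener. The same calls condensed into one script, with `$rpc` as shorthand for the rpc.py path used throughout this run:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420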
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:22.223 19:24:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:22.481 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:22.481 fio-3.35 00:30:22.481 Starting 1 thread 00:30:25.048 00:30:25.048 test: (groupid=0, jobs=1): err= 0: pid=454411: Fri Dec 13 19:24:59 2024 00:30:25.048 read: IOPS=17.7k, BW=69.3MiB/s (72.6MB/s)(139MiB/2004msec) 00:30:25.048 slat (nsec): min=1343, max=38386, avg=1492.36, stdev=484.31 00:30:25.048 clat (usec): min=2299, max=6519, avg=3585.57, stdev=81.48 00:30:25.048 lat (usec): min=2321, max=6520, avg=3587.06, stdev=81.40 00:30:25.048 clat percentiles (usec): 00:30:25.048 | 1.00th=[ 3556], 5.00th=[ 3556], 10.00th=[ 3556], 20.00th=[ 3589], 00:30:25.048 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3589], 00:30:25.048 | 70.00th=[ 3589], 80.00th=[ 3589], 90.00th=[ 3589], 95.00th=[ 3621], 00:30:25.048 | 99.00th=[ 3621], 99.50th=[ 3752], 99.90th=[ 4293], 99.95th=[ 5604], 00:30:25.048 | 99.99th=[ 6521] 00:30:25.048 bw ( KiB/s): min=69640, max=71528, per=100.00%, avg=70956.00, stdev=883.25, samples=4 00:30:25.048 iops : min=17410, max=17882, avg=17739.00, stdev=220.81, samples=4 00:30:25.048 write: IOPS=17.7k, BW=69.3MiB/s (72.6MB/s)(139MiB/2004msec); 0 zone resets 00:30:25.048 slat (nsec): min=1378, max=17114, avg=1566.33, stdev=477.79 00:30:25.048 clat (usec): min=2332, max=6537, avg=3584.61, stdev=86.14 00:30:25.048 lat (usec): min=2344, max=6538, avg=3586.17, stdev=86.07 00:30:25.048 clat percentiles (usec): 00:30:25.048 | 1.00th=[ 3556], 5.00th=[ 3556], 10.00th=[ 3556], 20.00th=[ 3556], 00:30:25.048 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3589], 00:30:25.048 | 70.00th=[ 3589], 80.00th=[ 3589], 90.00th=[ 3589], 95.00th=[ 3621], 00:30:25.048 | 99.00th=[ 3621], 99.50th=[ 3752], 99.90th=[ 4752], 99.95th=[ 5604], 00:30:25.048 | 99.99th=[ 6521] 00:30:25.048 bw ( KiB/s): min=69624, max=71432, per=100.00%, avg=70936.00, stdev=875.87, samples=4 00:30:25.048 iops : min=17406, max=17858, avg=17734.00, stdev=218.97, samples=4 00:30:25.048 lat (msec) : 4=99.83%, 10=0.17% 00:30:25.048 cpu : usr=99.40%, sys=0.15%, ctx=15, majf=0, minf=2 00:30:25.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:25.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:25.048 issued rwts: total=35534,35539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:25.048 00:30:25.048 Run status group 0 (all jobs): 00:30:25.048 READ: bw=69.3MiB/s (72.6MB/s), 69.3MiB/s-69.3MiB/s (72.6MB/s-72.6MB/s), io=139MiB (146MB), run=2004-2004msec 00:30:25.048 WRITE: bw=69.3MiB/s (72.6MB/s), 69.3MiB/s-69.3MiB/s (72.6MB/s-72.6MB/s), io=139MiB (146MB), run=2004-2004msec 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
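Each fio run in this test goes through the SPDK fio plugin: the plugin library is LD_PRELOADed into a stock fio binary, and the target is addressed with a `trtype=... traddr=...` pseudo-filename instead of a block device. A minimal invocation sketch; the job file is a hypothetical stand-in for example_config.fio (its contents are not shown in the log), using the parameters the run headers report (randrw, bs=4096, iodepth=128):

    #!/usr/bin/env bash
    # /tmp/spdk_job.fio (hypothetical stand-in for example_config.fio):
    #   [global]
    #   ioengine=spdk        # the SPDK plugin supplies this engine
    #   thread=1             # the plugin runs jobs as threads
    #   rw=randrw
    #   bs=4096
    #   iodepth=128
    #   [test]
    LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /tmp/spdk_job.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'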
LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:25.048 19:24:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:30:25.314 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:25.314 fio-3.35 00:30:25.314 Starting 1 thread 00:30:27.842 00:30:27.842 test: (groupid=0, jobs=1): err= 0: pid=454977: Fri Dec 13 19:25:01 2024 00:30:27.842 read: IOPS=14.3k, BW=223MiB/s (234MB/s)(443MiB/1984msec) 00:30:27.842 slat (nsec): min=2246, max=50167, avg=2597.35, stdev=954.20 00:30:27.842 clat (usec): min=485, max=8643, avg=1690.33, stdev=1381.36 00:30:27.842 lat (usec): min=488, max=8664, avg=1692.93, stdev=1381.64 00:30:27.842 clat percentiles (usec): 00:30:27.842 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 840], 20.00th=[ 922], 00:30:27.842 | 30.00th=[ 988], 40.00th=[ 1057], 50.00th=[ 1172], 60.00th=[ 1287], 00:30:27.842 | 70.00th=[ 1418], 80.00th=[ 1647], 90.00th=[ 4883], 95.00th=[ 4948], 00:30:27.842 | 99.00th=[ 6259], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7570], 00:30:27.842 | 99.99th=[ 8586] 00:30:27.842 bw ( KiB/s): min=111168, max=115584, per=49.34%, avg=112904.00, stdev=1932.12, samples=4 00:30:27.842 iops : min= 6948, max= 7224, avg=7056.50, stdev=120.76, samples=4 00:30:27.842 write: IOPS=8248, BW=129MiB/s (135MB/s)(230MiB/1788msec); 0 zone resets 00:30:27.842 slat (usec): min=26, max=115, avg=28.99, stdev= 5.17 00:30:27.842 clat (usec): min=4023, max=21410, avg=12610.14, stdev=1797.89 00:30:27.842 lat (usec): min=4052, max=21438, avg=12639.12, stdev=1797.70 00:30:27.842 clat percentiles (usec): 00:30:27.842 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[10552], 20.00th=[11338], 00:30:27.842 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:30:27.842 | 70.00th=[13435], 80.00th=[14091], 90.00th=[14877], 95.00th=[15664], 00:30:27.842 | 99.00th=[17171], 99.50th=[17695], 99.90th=[20317], 99.95th=[20841], 00:30:27.842 | 99.99th=[21365] 00:30:27.842 bw ( KiB/s): min=113280, max=119328, per=88.77%, avg=117160.00, stdev=2684.62, samples=4 00:30:27.842 iops : min= 7080, max= 7458, avg=7322.50, stdev=167.79, samples=4 00:30:27.842 lat (usec) : 500=0.01%, 750=2.15%, 1000=18.66% 00:30:27.842 lat (msec) : 2=35.09%, 4=1.87%, 10=9.75%, 20=32.42%, 50=0.05% 00:30:27.842 cpu : usr=96.96%, sys=1.35%, ctx=183, majf=0, minf=2 00:30:27.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:27.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:27.842 issued rwts: total=28375,14749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:27.842 00:30:27.842 Run status group 0 (all jobs): 00:30:27.842 READ: bw=223MiB/s (234MB/s), 223MiB/s-223MiB/s (234MB/s-234MB/s), io=443MiB (465MB), run=1984-1984msec 00:30:27.842 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=230MiB (242MB), run=1788-1788msec 00:30:27.842 19:25:01 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 
00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:30:27.842 19:25:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:30:31.128 Nvme0n1 00:30:31.128 19:25:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:36.401 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=c1bfa528-64da-457c-9e63-a56238740857 00:30:36.401 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb c1bfa528-64da-457c-9e63-a56238740857 00:30:36.401 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=c1bfa528-64da-457c-9e63-a56238740857 00:30:36.401 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:36.401 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:30:36.401 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:30:36.401 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:36.660 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:36.660 { 00:30:36.660 "uuid": "c1bfa528-64da-457c-9e63-a56238740857", 00:30:36.660 "name": "lvs_0", 00:30:36.660 "base_bdev": "Nvme0n1", 00:30:36.660 "total_data_clusters": 1862, 00:30:36.660 "free_clusters": 1862, 00:30:36.660 "block_size": 512, 00:30:36.661 "cluster_size": 1073741824 00:30:36.661 } 00:30:36.661 ]' 00:30:36.661 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c1bfa528-64da-457c-9e63-a56238740857") .free_clusters' 00:30:36.661 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1862 00:30:36.661 19:25:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c1bfa528-64da-457c-9e63-a56238740857") .cluster_size' 00:30:36.661 19:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:30:36.661 19:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1906688 00:30:36.661 19:25:11 
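get_lvs_free_mb above converts the store's free space to mebibytes: 1862 free clusters at a 1 GiB (1073741824-byte) cluster size is 1906688 MiB, which is exactly the size handed to bdev_lvol_create next. The arithmetic, checked in shell:

    #!/usr/bin/env bash
    fc=1862            # free_clusters reported by bdev_lvol_get_lvstores
    cs=1073741824      # cluster_size in bytes (1 GiB)
    echo $(( fc * cs / 1024 / 1024 ))   # prints 1906688 (MiB)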
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1906688 00:30:36.661 1906688 00:30:36.661 19:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:30:37.229 159aae2c-8df0-4200-a2af-5aec6e424b87 00:30:37.229 19:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:37.489 19:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:37.748 19:25:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:38.007 19:25:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:38.268 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:38.268 fio-3.35 00:30:38.268 Starting 1 thread 00:30:40.804 00:30:40.804 test: (groupid=0, jobs=1): err= 0: pid=457266: Fri Dec 13 19:25:14 2024 00:30:40.804 read: IOPS=9848, BW=38.5MiB/s (40.3MB/s)(77.1MiB/2005msec) 00:30:40.805 slat (nsec): min=1359, max=19736, avg=1479.83, stdev=250.19 00:30:40.805 clat (usec): min=197, max=332568, avg=6443.21, stdev=18697.16 00:30:40.805 lat (usec): min=198, max=332571, avg=6444.69, stdev=18697.20 00:30:40.805 clat percentiles (msec): 00:30:40.805 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:30:40.805 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:30:40.805 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:30:40.805 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 334], 99.95th=[ 334], 00:30:40.805 | 99.99th=[ 334] 00:30:40.805 bw ( KiB/s): min=14658, max=47792, per=99.88%, avg=39350.50, stdev=16462.69, samples=4 00:30:40.805 iops : min= 3664, max=11948, avg=9837.50, stdev=4115.92, samples=4 00:30:40.805 write: IOPS=9861, BW=38.5MiB/s (40.4MB/s)(77.2MiB/2005msec); 0 zone resets 00:30:40.805 slat (nsec): min=1398, max=17281, avg=1538.68, stdev=278.54 00:30:40.805 clat (usec): min=158, max=332899, avg=6407.69, stdev=18175.68 00:30:40.805 lat (usec): min=160, max=332902, avg=6409.23, stdev=18175.74 00:30:40.805 clat percentiles (msec): 00:30:40.805 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:30:40.805 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:30:40.805 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:30:40.805 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:30:40.805 | 99.99th=[ 334] 00:30:40.805 bw ( KiB/s): min=15329, max=47520, per=99.93%, avg=39418.25, stdev=16059.61, samples=4 00:30:40.805 iops : min= 3832, max=11880, avg=9854.50, stdev=4015.03, samples=4 00:30:40.805 lat (usec) : 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.03% 00:30:40.805 lat (msec) : 2=0.03%, 4=0.28%, 10=99.31%, 500=0.32% 00:30:40.805 cpu : usr=99.40%, sys=0.10%, ctx=15, majf=0, minf=2 00:30:40.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:40.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:40.805 issued rwts: total=19747,19772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:40.805 00:30:40.805 Run status group 0 (all jobs): 00:30:40.805 READ: bw=38.5MiB/s (40.3MB/s), 38.5MiB/s-38.5MiB/s (40.3MB/s-40.3MB/s), io=77.1MiB (80.9MB), run=2005-2005msec 00:30:40.805 WRITE: bw=38.5MiB/s (40.4MB/s), 38.5MiB/s-38.5MiB/s (40.4MB/s-40.4MB/s), io=77.2MiB (81.0MB), 
run=2005-2005msec 00:30:40.805 19:25:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:40.805 19:25:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=b149cbbd-74ac-43ce-8491-24b083d89c82 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb b149cbbd-74ac-43ce-8491-24b083d89c82 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=b149cbbd-74ac-43ce-8491-24b083d89c82 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:42.182 { 00:30:42.182 "uuid": "c1bfa528-64da-457c-9e63-a56238740857", 00:30:42.182 "name": "lvs_0", 00:30:42.182 "base_bdev": "Nvme0n1", 00:30:42.182 "total_data_clusters": 1862, 00:30:42.182 "free_clusters": 0, 00:30:42.182 "block_size": 512, 00:30:42.182 "cluster_size": 1073741824 00:30:42.182 }, 00:30:42.182 { 00:30:42.182 "uuid": "b149cbbd-74ac-43ce-8491-24b083d89c82", 00:30:42.182 "name": "lvs_n_0", 00:30:42.182 "base_bdev": "159aae2c-8df0-4200-a2af-5aec6e424b87", 00:30:42.182 "total_data_clusters": 476206, 00:30:42.182 "free_clusters": 476206, 00:30:42.182 "block_size": 512, 00:30:42.182 "cluster_size": 4194304 00:30:42.182 } 00:30:42.182 ]' 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b149cbbd-74ac-43ce-8491-24b083d89c82") .free_clusters' 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=476206 00:30:42.182 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b149cbbd-74ac-43ce-8491-24b083d89c82") .cluster_size' 00:30:42.441 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:42.441 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1904824 00:30:42.441 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1904824 00:30:42.441 1904824 00:30:42.441 19:25:16 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:30:43.378 0fb6831c-1338-4504-9487-6f6da77d341d 00:30:43.378 19:25:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:43.378 19:25:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 
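The nested store repeats the pattern one level down: lvs_n_0 lives on the lvol bdev carved from lvs_0 and uses 4 MiB clusters, so its 476206 free clusters yield 1904824 MiB for lbd_nest_0. The same check:

    #!/usr/bin/env bash
    fc=476206          # free_clusters of lvs_n_0
    cs=4194304         # cluster_size in bytes (4 MiB)
    echo $(( fc * cs / 1024 / 1024 ))   # prints 1904824 (MiB)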
00:30:43.637 19:25:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:43.896 19:25:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:44.155 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:44.155 fio-3.35 00:30:44.155 Starting 1 thread 00:30:46.690 00:30:46.690 test: (groupid=0, jobs=1): err= 0: pid=458456: Fri Dec 13 19:25:20 2024 00:30:46.690 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(78.4MiB/2005msec) 00:30:46.690 slat (nsec): min=1347, max=19517, avg=1473.71, stdev=345.73 00:30:46.690 clat (usec): min=3125, max=11092, avg=6317.16, stdev=216.23 00:30:46.690 lat (usec): min=3128, max=11094, avg=6318.63, stdev=216.20 00:30:46.690 clat percentiles (usec): 00:30:46.690 | 1.00th=[ 5604], 5.00th=[ 6259], 10.00th=[ 6259], 20.00th=[ 6259], 00:30:46.690 | 30.00th=[ 6325], 40.00th=[ 6325], 50.00th=[ 6325], 60.00th=[ 6325], 00:30:46.690 | 70.00th=[ 6325], 80.00th=[ 6325], 90.00th=[ 6390], 95.00th=[ 6390], 00:30:46.690 | 99.00th=[ 6980], 99.50th=[ 7046], 99.90th=[ 9372], 99.95th=[10290], 00:30:46.690 | 99.99th=[11076] 00:30:46.690 bw ( KiB/s): min=38648, max=40760, per=99.92%, avg=40006.00, stdev=941.61, samples=4 00:30:46.690 iops : min= 9662, max=10190, avg=10001.50, stdev=235.40, samples=4 00:30:46.690 write: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(78.5MiB/2005msec); 0 zone resets 00:30:46.690 slat (nsec): min=1380, max=17501, avg=1544.45, stdev=336.05 00:30:46.690 clat (usec): min=3127, max=11125, avg=6336.92, stdev=219.82 00:30:46.690 lat (usec): min=3130, max=11126, avg=6338.46, stdev=219.80 00:30:46.690 clat percentiles (usec): 00:30:46.690 | 1.00th=[ 5604], 5.00th=[ 6259], 10.00th=[ 6259], 20.00th=[ 6325], 00:30:46.690 | 30.00th=[ 6325], 40.00th=[ 6325], 50.00th=[ 6325], 60.00th=[ 6325], 00:30:46.690 | 70.00th=[ 6325], 80.00th=[ 6390], 90.00th=[ 6390], 95.00th=[ 6390], 00:30:46.690 | 99.00th=[ 7046], 99.50th=[ 7046], 99.90th=[ 9503], 99.95th=[10290], 00:30:46.690 | 99.99th=[11076] 00:30:46.690 bw ( KiB/s): min=39056, max=40520, per=99.95%, avg=40060.00, stdev=678.65, samples=4 00:30:46.690 iops : min= 9764, max=10130, avg=10015.00, stdev=169.66, samples=4 00:30:46.690 lat (msec) : 4=0.04%, 10=99.88%, 20=0.08% 00:30:46.690 cpu : usr=99.50%, sys=0.15%, ctx=26, majf=0, minf=2 00:30:46.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:46.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:46.690 issued rwts: total=20069,20091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:46.690 00:30:46.690 Run status group 0 (all jobs): 00:30:46.690 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=78.4MiB (82.2MB), run=2005-2005msec 00:30:46.690 WRITE: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=78.5MiB (82.3MB), run=2005-2005msec 00:30:46.690 19:25:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:46.949 19:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:46.949 19:25:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:55.070 19:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:55.070 19:25:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:00.345 19:25:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:00.345 19:25:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:03.638 rmmod nvme_rdma 00:31:03.638 rmmod nvme_fabrics 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 453891 ']' 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 453891 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 453891 ']' 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 453891 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453891 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453891' 00:31:03.638 killing process with pid 453891 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 453891 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 453891 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:03.638 00:31:03.638 real 0m50.552s 00:31:03.638 user 3m38.897s 00:31:03.638 sys 0m8.194s 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.638 ************************************ 00:31:03.638 END TEST nvmf_fio_host 00:31:03.638 ************************************ 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.638 ************************************ 00:31:03.638 START TEST nvmf_failover 00:31:03.638 ************************************ 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:31:03.638 * Looking for test storage... 00:31:03.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:31:03.638 19:25:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.898 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:03.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.899 --rc genhtml_branch_coverage=1 00:31:03.899 --rc genhtml_function_coverage=1 00:31:03.899 --rc genhtml_legend=1 00:31:03.899 --rc geninfo_all_blocks=1 00:31:03.899 --rc geninfo_unexecuted_blocks=1 00:31:03.899 00:31:03.899 ' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:03.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.899 --rc genhtml_branch_coverage=1 00:31:03.899 --rc genhtml_function_coverage=1 00:31:03.899 --rc genhtml_legend=1 00:31:03.899 --rc geninfo_all_blocks=1 00:31:03.899 --rc geninfo_unexecuted_blocks=1 00:31:03.899 00:31:03.899 ' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:03.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.899 --rc genhtml_branch_coverage=1 00:31:03.899 --rc genhtml_function_coverage=1 00:31:03.899 --rc genhtml_legend=1 00:31:03.899 --rc geninfo_all_blocks=1 00:31:03.899 --rc geninfo_unexecuted_blocks=1 00:31:03.899 00:31:03.899 ' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:03.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.899 --rc genhtml_branch_coverage=1 00:31:03.899 --rc genhtml_function_coverage=1 00:31:03.899 --rc genhtml_legend=1 00:31:03.899 --rc geninfo_all_blocks=1 00:31:03.899 --rc geninfo_unexecuted_blocks=1 00:31:03.899 00:31:03.899 ' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.899 19:25:38 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:03.899 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:03.899 19:25:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:12.025 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:12.025 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:12.026 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:12.026 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:12.026 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:12.026 19:25:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:12.026 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:12.026 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:12.026 altname enp217s0f0np0 00:31:12.026 altname ens818f0np0 00:31:12.026 inet 192.168.100.8/24 scope global mlx_0_0 00:31:12.026 
valid_lft forever preferred_lft forever 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:12.026 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:12.026 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:12.026 altname enp217s0f1np1 00:31:12.026 altname ens818f1np1 00:31:12.026 inet 192.168.100.9/24 scope global mlx_0_1 00:31:12.026 valid_lft forever preferred_lft forever 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:12.026 19:25:45 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:12.026 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:12.027 192.168.100.9' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:12.027 192.168.100.9' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:12.027 192.168.100.9' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=464965 00:31:12.027 
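NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP parsed above are simply the first and second lines of the two-line RDMA_IP_LIST; a minimal sketch of that head/tail selection, using the addresses seen in this run:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9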
19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 464965 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 464965 ']' 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.027 [2024-12-13 19:25:45.305365] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:31:12.027 [2024-12-13 19:25:45.305418] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.027 [2024-12-13 19:25:45.402105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:12.027 [2024-12-13 19:25:45.424098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.027 [2024-12-13 19:25:45.424135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.027 [2024-12-13 19:25:45.424144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.027 [2024-12-13 19:25:45.424153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.027 [2024-12-13 19:25:45.424159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
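waitforlisten above blocks until the freshly started nvmf_tgt answers on its RPC socket. A minimal sketch of that start-and-poll pattern, assuming the binary and rpc.py paths from this run (the real helper also bounds the loop with the max_retries seen in the trace rather than polling forever):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# poll the RPC socket until the target responds; rpc_get_methods is a
# harmless query that only succeeds once the app is listening
while ! /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
  sleep 0.5
done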
00:31:12.027 [2024-12-13 19:25:45.425648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.027 [2024-12-13 19:25:45.425756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.027 [2024-12-13 19:25:45.425758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:12.027 [2024-12-13 19:25:45.760107] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x540c40/0x5450f0) succeed. 00:31:12.027 [2024-12-13 19:25:45.769185] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5421e0/0x586790) succeed. 00:31:12.027 19:25:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:12.027 Malloc0 00:31:12.027 19:25:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:12.027 19:25:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:12.286 19:25:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:12.286 [2024-12-13 19:25:46.660782] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:12.545 19:25:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:12.545 [2024-12-13 19:25:46.865238] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:31:12.545 19:25:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:31:12.805 [2024-12-13 19:25:47.069973] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=465347 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify 
-t 15 -f 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 465347 /var/tmp/bdevperf.sock 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 465347 ']' 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:12.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.805 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:13.064 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.064 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:13.064 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:13.323 NVMe0n1 00:31:13.323 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:13.582 00:31:13.582 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=465359 00:31:13.582 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:13.582 19:25:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:14.520 19:25:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:14.778 19:25:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:18.067 19:25:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:18.067 00:31:18.067 19:25:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:18.326 19:25:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:21.616 19:25:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:21.616 [2024-12-13 19:25:55.748709] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:21.616 19:25:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:22.552 19:25:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:31:22.811 19:25:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 465359 00:31:29.386 { 00:31:29.386 "results": [ 00:31:29.386 { 00:31:29.386 "job": "NVMe0n1", 00:31:29.386 "core_mask": "0x1", 00:31:29.386 "workload": "verify", 00:31:29.386 "status": "finished", 00:31:29.386 "verify_range": { 00:31:29.386 "start": 0, 00:31:29.386 "length": 16384 00:31:29.386 }, 00:31:29.386 "queue_depth": 128, 00:31:29.386 "io_size": 4096, 00:31:29.386 "runtime": 15.005367, 00:31:29.386 "iops": 14212.714690683673, 00:31:29.386 "mibps": 55.5184167604831, 00:31:29.386 "io_failed": 4684, 00:31:29.386 "io_timeout": 0, 00:31:29.386 "avg_latency_us": 8791.445284138177, 00:31:29.386 "min_latency_us": 329.3184, 00:31:29.386 "max_latency_us": 1020054.7328 00:31:29.386 } 00:31:29.386 ], 00:31:29.386 "core_count": 1 00:31:29.386 } 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 465347 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 465347 ']' 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 465347 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465347 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465347' 00:31:29.386 killing process with pid 465347 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 465347 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 465347 00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:29.386 [2024-12-13 19:25:47.148310] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
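The "mibps" field in the results block above is just "iops" scaled by the 4096-byte io_size; a quick sanity check of that relationship, using awk purely as a calculator on values copied from the JSON:

awk 'BEGIN { iops = 14212.714690683673; io_size = 4096
             printf "%.10f MiB/s\n", iops * io_size / 1048576 }'
# prints 55.5184167605, matching the reported "mibps" to rounding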
00:31:29.386 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:29.386 [2024-12-13 19:25:47.148310] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:31:29.386 [2024-12-13 19:25:47.148371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465347 ]
00:31:29.386 [2024-12-13 19:25:47.242709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:29.386 [2024-12-13 19:25:47.265048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:31:29.386 Running I/O for 15 seconds...
00:31:29.386 17921.00 IOPS, 70.00 MiB/s [2024-12-13T18:26:03.764Z] 9728.00 IOPS, 38.00 MiB/s [2024-12-13T18:26:03.764Z]
[repeated log run elided: from 19:25:50.072500 onward, nvme_qpair.c emits one *NOTICE* pair per in-flight command on qid:1, WRITEs for lba 24840-25592 and READs for lba 24576-24824, each completing as ABORTED - SQ DELETION (00/08) after the active listener was removed]
00:31:29.389 [2024-12-13 19:25:50.076783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:29.389 [2024-12-13 19:25:50.076796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:29.389 [2024-12-13 19:25:50.076804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:8 PRP1 0x0 PRP2 0x0
00:31:29.389 [2024-12-13 19:25:50.076813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:29.389 [2024-12-13 19:25:50.076857] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:31:29.389 [2024-12-13 19:25:50.076868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:31:29.389 [2024-12-13 19:25:50.079633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:31:29.389 [2024-12-13 19:25:50.093926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:31:29.389 [2024-12-13 19:25:50.136784] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
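[editor's note] When triaging a run like this, it is faster to summarize the abort storm than to read it line by line. A small sketch, assuming the harness left try.txt at the path shown above; it uses only standard grep/awk, nothing SPDK-specific:

    #!/usr/bin/env bash
    # Summarize failover events and the abort storm in a bdevperf failover log.
    LOG=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt

    # How many in-flight commands were aborted by queue-pair deletion?
    grep -c 'ABORTED - SQ DELETION' "$LOG"

    # Which path transitions happened, and did each reset complete?
    grep -E 'Start failover from|Resetting controller successful' "$LOG"

    # Split the printed commands by opcode to see the read/write mix in flight.
    awk '/ABORTED - SQ DELETION/ {n++}
         /nvme_io_qpair_print_command.*WRITE/ {w++}
         /nvme_io_qpair_print_command.*READ/  {r++}
         END {printf "aborts=%d writes=%d reads=%d\n", n, w, r}' "$LOG"

The key health signal is the pairing in the second grep: every "Start failover from X to Y" should be followed by a "Resetting controller successful.", as it is in the leg above.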
00:31:29.389 11541.33 IOPS, 45.08 MiB/s [2024-12-13T18:26:03.767Z] 13128.00 IOPS, 51.28 MiB/s [2024-12-13T18:26:03.767Z] 12452.80 IOPS, 48.64 MiB/s [2024-12-13T18:26:03.767Z]
[repeated log run elided: from 19:25:53.554833 onward a second abort storm follows the next path switch, interleaved WRITE (lba 119192-119376) and READ (lba 118712-118960) notices on qid:1, each completing as ABORTED - SQ DELETION (00/08); this capture cuts off mid-entry at 19:25:53.555947]
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.555958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.391 [2024-12-13 19:25:53.555968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.555978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.391 [2024-12-13 19:25:53.555987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.555997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.391 [2024-12-13 19:25:53.556005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.391 [2024-12-13 19:25:53.556024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x183700 00:31:29.391 [2024-12-13 19:25:53.556047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x183700 00:31:29.391 [2024-12-13 19:25:53.556069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x183700 00:31:29.391 [2024-12-13 19:25:53.556088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x183700 00:31:29.391 [2024-12-13 19:25:53.556108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183700 00:31:29.391 [2024-12-13 19:25:53.556127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556138] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x183700 00:31:29.391 [2024-12-13 19:25:53.556146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x183700 00:31:29.391 [2024-12-13 19:25:53.556165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x183700 00:31:29.391 [2024-12-13 19:25:53.556185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.391 [2024-12-13 19:25:53.556196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 
key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x183700 00:31:29.392 [2024-12-13 19:25:53.556795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.392 [2024-12-13 19:25:53.556928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.392 [2024-12-13 19:25:53.556938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.556946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.556957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.556966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.556977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.556986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.556996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 
19:25:53.557044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119720 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.393 [2024-12-13 19:25:53.557253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x183700 00:31:29.393 [2024-12-13 19:25:53.557272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x183700 00:31:29.393 [2024-12-13 19:25:53.557291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.557302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x183700 00:31:29.393 [2024-12-13 19:25:53.557311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18771 cdw0:3bb7c000 sqhd:2c92 p:1 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.559120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:29.393 [2024-12-13 19:25:53.559136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:29.393 [2024-12-13 19:25:53.559144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119184 len:8 PRP1 0x0 PRP2 0x0 00:31:29.393 [2024-12-13 19:25:53.559154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-12-13 19:25:53.559194] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:31:29.393 [2024-12-13 19:25:53.559205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:31:29.393 [2024-12-13 19:25:53.561938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:29.393 [2024-12-13 19:25:53.576218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:31:29.393 [2024-12-13 19:25:53.615710] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:31:29.393 11513.50 IOPS, 44.97 MiB/s [2024-12-13T18:26:03.771Z] 12459.43 IOPS, 48.67 MiB/s [2024-12-13T18:26:03.771Z] 13166.50 IOPS, 51.43 MiB/s [2024-12-13T18:26:03.771Z] 13609.22 IOPS, 53.16 MiB/s [2024-12-13T18:26:03.771Z]
00:31:29.393 [repeated I/O abort notices elided: nvme_qpair.c: 243:nvme_io_qpair_print_command / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* pairs for READ and WRITE commands on sqid:1 (lba 88456-89192, len:8, key:0x184e00), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0]
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x184e00 00:31:29.396 [2024-12-13 19:25:57.961270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x184e00 00:31:29.396 [2024-12-13 19:25:57.961289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x184e00 00:31:29.396 [2024-12-13 19:25:57.961309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x184e00 00:31:29.396 [2024-12-13 19:25:57.961330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961440] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 
19:25:57.961631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x184e00 00:31:29.396 [2024-12-13 19:25:57.961813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x184e00 00:31:29.396 [2024-12-13 19:25:57.961833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x184e00 00:31:29.396 [2024-12-13 19:25:57.961853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:29.396 [2024-12-13 19:25:57.961985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0 00:31:29.396 [2024-12-13 19:25:57.961995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:29.397 [2024-12-13 19:25:57.962004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0
00:31:29.397 [2024-12-13 19:25:57.962014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:29.397 [2024-12-13 19:25:57.962023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0
00:31:29.397 [2024-12-13 19:25:57.962034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:29.397 [2024-12-13 19:25:57.962046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18773 cdw0:3bb7c000 sqhd:1808 p:1 m:0 dnr:0
00:31:29.397 [2024-12-13 19:25:57.963817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:29.397 [2024-12-13 19:25:57.963831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:29.397 [2024-12-13 19:25:57.963842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89472 len:8 PRP1 0x0 PRP2 0x0
00:31:29.397 [2024-12-13 19:25:57.963853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:29.397 [2024-12-13 19:25:57.963894] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:31:29.397 [2024-12-13 19:25:57.963905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:31:29.397 [2024-12-13 19:25:57.966652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:31:29.397 [2024-12-13 19:25:57.980572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0
00:31:29.397 12248.30 IOPS, 47.84 MiB/s [2024-12-13T18:26:03.775Z] [2024-12-13 19:25:58.018307] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
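A note on what produces the sequence above: the test registers one subsystem with several RDMA listeners and attaches all of the trids to a single bdev in failover mode, so dropping the active path triggers exactly this abort/failover/reset chain. A minimal sketch, assuming the target and the bdevperf RPC socket from this run are already up and that rpc.py is SPDK's scripts/rpc.py on PATH (the same RPCs appear verbatim in the host/failover.sh trace further below):

  # Expose two extra paths on the subsystem (the first listener on 4420 is
  # assumed to exist already from the target setup).
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
  # Register all three trids on one bdev; -x failover is the multipath mode
  # exercised by this test.
  for port in 4420 4421 4422; do
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
          -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # Detaching the active trid then forces a failover like the one logged above.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1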
00:31:29.397 12763.27 IOPS, 49.86 MiB/s [2024-12-13T18:26:03.775Z] 13217.58 IOPS, 51.63 MiB/s [2024-12-13T18:26:03.775Z] 13599.00 IOPS, 53.12 MiB/s [2024-12-13T18:26:03.775Z] 13927.07 IOPS, 54.40 MiB/s [2024-12-13T18:26:03.775Z] 14212.27 IOPS, 55.52 MiB/s
00:31:29.397 Latency(us)
00:31:29.397 [2024-12-13T18:26:03.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:29.397 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:29.397 Verification LBA range: start 0x0 length 0x4000
00:31:29.397 NVMe0n1 : 15.01 14212.71 55.52 312.15 0.00 8791.45 329.32 1020054.73
00:31:29.397 [2024-12-13T18:26:03.775Z] ===================================================================================================================
00:31:29.397 [2024-12-13T18:26:03.775Z] Total : 14212.71 55.52 312.15 0.00 8791.45 329.32 1020054.73
00:31:29.397 Received shutdown signal, test time was about 15.000000 seconds
00:31:29.397 00
00:31:29.397 Latency(us)
00:31:29.397 [2024-12-13T18:26:03.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:29.397 [2024-12-13T18:26:03.775Z] ===================================================================================================================
00:31:29.397 [2024-12-13T18:26:03.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=468011
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 468011 /var/tmp/bdevperf.sock
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 468011 ']'
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
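The host/failover.sh@65-67 records above are the entire pass/fail gate for this phase: count the 'Resetting controller successful' notices and require exactly one per induced path failure. A condensed sketch of that check; the capture file name is illustrative, since the redirection of the bdevperf output into try.txt happens outside the excerpt shown here:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful failover resets, saw $count" >&2
      exit 1
  fi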
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:31:29.397 [2024-12-13 19:26:03.715796] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:31:29.397 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:31:29.656 [2024-12-13 19:26:03.912445] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:31:29.656 19:26:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:31:29.915 NVMe0n1
00:31:29.915 19:26:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:31:30.174 00
00:31:30.174 19:26:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:31:30.434 00
00:31:30.434 19:26:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:30.434 19:26:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:31:30.693 19:26:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:30.951 19:26:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:31:34.239 19:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:34.240 19:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:31:34.240 19:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:34.240 19:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=468820
00:31:34.240 19:26:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 468820
00:31:35.177 {
00:31:35.177   "results": [
00:31:35.177     {
00:31:35.177       "job": "NVMe0n1",
00:31:35.177       "core_mask": "0x1",
00:31:35.177       "workload": "verify",
00:31:35.177       "status": "finished",
00:31:35.177       "verify_range": {
00:31:35.177         "start": 0,
00:31:35.177         "length": 16384
00:31:35.177       },
00:31:35.177       "queue_depth": 128,
00:31:35.177       "io_size": 4096,
00:31:35.177       "runtime": 1.005893,
00:31:35.177       "iops": 17815.016110063396,
00:31:35.177       "mibps": 69.58990667993514,
00:31:35.177       "io_failed": 0,
00:31:35.177       "io_timeout": 0,
00:31:35.177       "avg_latency_us": 7147.02921142857,
00:31:35.177       "min_latency_us": 2477.2608,
00:31:35.177       "max_latency_us": 14575.2064
00:31:35.177     }
00:31:35.177   ],
00:31:35.177   "core_count": 1
00:31:35.177 }
00:31:35.177 19:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:35.177 [2024-12-13 19:26:03.341290] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:31:35.177 [2024-12-13 19:26:03.341350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468011 ]
00:31:35.177 [2024-12-13 19:26:03.433769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:35.177 [2024-12-13 19:26:03.452987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:31:35.177 [2024-12-13 19:26:05.091605] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:31:35.177 [2024-12-13 19:26:05.092274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:31:35.177 [2024-12-13 19:26:05.092308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:31:35.177 [2024-12-13 19:26:05.110828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0
00:31:35.177 [2024-12-13 19:26:05.127014] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:31:35.177 Running I/O for 1 seconds...
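The JSON block above is the perform_tests result returned over the bdevperf RPC socket. As a post-processing sketch only, assuming the block were saved to results.json and jq were available (the test itself does neither):

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed, avg \(.avg_latency_us) us"' results.json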
00:31:35.177 17792.00 IOPS, 69.50 MiB/s
00:31:35.177 Latency(us)
00:31:35.177 [2024-12-13T18:26:09.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:35.177 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:35.177 Verification LBA range: start 0x0 length 0x4000
00:31:35.177 NVMe0n1 : 1.01 17815.02 69.59 0.00 0.00 7147.03 2477.26 14575.21
00:31:35.177 [2024-12-13T18:26:09.555Z] ===================================================================================================================
00:31:35.177 [2024-12-13T18:26:09.555Z] Total : 17815.02 69.59 0.00 0.00 7147.03 2477.26 14575.21
00:31:35.177 19:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:35.177 19:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:35.436 19:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:35.695 19:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:35.695 19:26:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:35.695 19:26:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:35.953 19:26:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 468011
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 468011 ']'
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 468011
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468011
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468011'
killing process with pid 468011
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 468011
00:31:39.241 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 468011
00:31:39.500 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:31:39.500 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:39.501 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 464965 ']'
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 464965
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 464965 ']'
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 464965
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464965
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464965'
killing process with pid 464965
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 464965
00:31:39.760 19:26:13 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 464965
00:31:40.019 19:26:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:40.019 19:26:14 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:31:40.019
00:31:40.019 real 0m36.349s
00:31:40.019 user 1m58.711s
00:31:40.019 sys 0m7.788s
00:31:40.019 19:26:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:40.019 19:26:14 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:31:40.019
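Two teardown idioms are visible in the trace above: killprocess checks the target's process name before signalling it (a guard against PID reuse, autotest_common.sh@958-964), and module removal runs under set +e inside a retry loop because nvme-rdma can stay busy while queue pairs drain. A condensed sketch of both, with the sudo special case simplified away (the real helper treats it differently):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                                   # process already gone
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1  # PID-reuse guard, simplified
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                                   # wait only reaps our own children
  }

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && break                            # may fail until references drain
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e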
************************************ 00:31:40.019 END TEST nvmf_failover 00:31:40.019 ************************************ 00:31:40.019 19:26:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:31:40.019 19:26:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.019 19:26:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.019 19:26:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.019 ************************************ 00:31:40.019 START TEST nvmf_host_discovery 00:31:40.019 ************************************ 00:31:40.019 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:31:40.278 * Looking for test storage... 00:31:40.278 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.278 --rc genhtml_branch_coverage=1 00:31:40.278 --rc genhtml_function_coverage=1 00:31:40.278 --rc genhtml_legend=1 00:31:40.278 --rc geninfo_all_blocks=1 00:31:40.278 --rc geninfo_unexecuted_blocks=1 00:31:40.278 00:31:40.278 ' 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.278 --rc genhtml_branch_coverage=1 00:31:40.278 --rc genhtml_function_coverage=1 00:31:40.278 --rc genhtml_legend=1 00:31:40.278 --rc geninfo_all_blocks=1 00:31:40.278 --rc geninfo_unexecuted_blocks=1 00:31:40.278 00:31:40.278 ' 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.278 --rc genhtml_branch_coverage=1 00:31:40.278 --rc genhtml_function_coverage=1 00:31:40.278 --rc genhtml_legend=1 00:31:40.278 --rc geninfo_all_blocks=1 00:31:40.278 --rc geninfo_unexecuted_blocks=1 00:31:40.278 00:31:40.278 ' 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.278 --rc genhtml_branch_coverage=1 00:31:40.278 --rc genhtml_function_coverage=1 00:31:40.278 --rc genhtml_legend=1 00:31:40.278 --rc geninfo_all_blocks=1 00:31:40.278 --rc geninfo_unexecuted_blocks=1 00:31:40.278 00:31:40.278 ' 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:40.278 19:26:14 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.278 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:40.279 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:31:40.279 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:31:40.279 00:31:40.279 real 0m0.230s 00:31:40.279 user 0m0.144s 00:31:40.279 sys 0m0.103s 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:40.279 ************************************ 00:31:40.279 END TEST nvmf_host_discovery 00:31:40.279 ************************************ 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.279 ************************************ 00:31:40.279 START TEST nvmf_host_multipath_status 00:31:40.279 ************************************ 00:31:40.279 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:31:40.539 * Looking for test storage... 00:31:40.539 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:31:40.539 19:26:14 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:40.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.539 --rc genhtml_branch_coverage=1 00:31:40.539 --rc genhtml_function_coverage=1 00:31:40.539 --rc genhtml_legend=1 00:31:40.539 --rc geninfo_all_blocks=1 00:31:40.539 --rc geninfo_unexecuted_blocks=1 00:31:40.539 00:31:40.539 ' 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:40.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.539 --rc genhtml_branch_coverage=1 00:31:40.539 --rc genhtml_function_coverage=1 00:31:40.539 --rc genhtml_legend=1 00:31:40.539 --rc geninfo_all_blocks=1 00:31:40.539 --rc geninfo_unexecuted_blocks=1 00:31:40.539 00:31:40.539 ' 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:40.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.539 --rc genhtml_branch_coverage=1 00:31:40.539 --rc genhtml_function_coverage=1 00:31:40.539 --rc genhtml_legend=1 00:31:40.539 --rc geninfo_all_blocks=1 00:31:40.539 --rc geninfo_unexecuted_blocks=1 00:31:40.539 00:31:40.539 ' 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:40.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.539 --rc genhtml_branch_coverage=1 00:31:40.539 --rc genhtml_function_coverage=1 
00:31:40.539 --rc genhtml_legend=1 00:31:40.539 --rc geninfo_all_blocks=1 00:31:40.539 --rc geninfo_unexecuted_blocks=1 00:31:40.539 00:31:40.539 ' 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.539 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:31:40.540 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.540 19:26:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.667 19:26:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:48.667 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:48.667 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:48.667 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:48.667 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:48.667 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:48.668 
19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:48.668 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:48.668 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:48.668 altname enp217s0f0np0 00:31:48.668 altname ens818f0np0 00:31:48.668 inet 192.168.100.8/24 scope global mlx_0_0 00:31:48.668 valid_lft forever preferred_lft forever 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
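[editor's note] The trace entries above are nvmf/common.sh's get_ip_address helper resolving each RDMA net device to its IPv4 address (mlx_0_0 above, with mlx_0_1's result following). Reassembled from the trace into a standalone sketch (the script may differ cosmetically from this):

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4 addr show` is ADDR/PREFIX; cut strips the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9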
00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:48.668 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:48.668 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:48.668 altname enp217s0f1np1 00:31:48.668 altname ens818f1np1 00:31:48.668 inet 192.168.100.9/24 scope global mlx_0_1 00:31:48.668 valid_lft forever preferred_lft forever 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:48.668 19:26:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:31:48.668 19:26:22 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:48.668 192.168.100.9' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:48.668 192.168.100.9' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:48.668 192.168.100.9' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:48.668 19:26:22 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=473216 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 473216 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 473216 ']' 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.668 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:48.668 [2024-12-13 19:26:22.146363] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:31:48.668 [2024-12-13 19:26:22.146415] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.668 [2024-12-13 19:26:22.222481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:48.668 [2024-12-13 19:26:22.244483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.669 [2024-12-13 19:26:22.244521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.669 [2024-12-13 19:26:22.244530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.669 [2024-12-13 19:26:22.244540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.669 [2024-12-13 19:26:22.244547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
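[editor's note] With nvmf_tgt (pid 473216) up and listening on /var/tmp/spdk.sock, the test drives it over JSON-RPC. For reference, here is the bring-up sequence traced below, collected in one place; rpc.py stands for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path used in the trace:

    # create the RDMA transport, a 64 MiB/512 B malloc bdev, and an ANA-reporting subsystem
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same address = the two paths exercised by the multipath checks
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421

The -r flag enables ANA reporting on the subsystem; the listeners on ports 4420 and 4421 are the paths whose ANA states the set_ANA_state calls further down toggle.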
00:31:48.669 [2024-12-13 19:26:22.247063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.669 [2024-12-13 19:26:22.247088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=473216 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:48.669 [2024-12-13 19:26:22.580712] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1993da0/0x1998250) succeed. 00:31:48.669 [2024-12-13 19:26:22.589672] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19952a0/0x19d98f0) succeed. 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:48.669 Malloc0 00:31:48.669 19:26:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:48.927 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:48.927 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:49.186 [2024-12-13 19:26:23.414224] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:49.186 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:49.445 [2024-12-13 19:26:23.618622] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=473477 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 473477 /var/tmp/bdevperf.sock 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 473477 ']' 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:49.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.445 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:49.704 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.704 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:49.704 19:26:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:49.963 19:26:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:50.222 Nvme0n1 00:31:50.222 19:26:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:50.481 Nvme0n1 00:31:50.481 19:26:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:50.481 19:26:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:52.386 19:26:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:52.386 19:26:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:31:52.646 19:26:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:52.904 19:26:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:53.841 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:53.841 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:53.841 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.841 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.100 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.100 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:54.100 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.100 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:54.359 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.359 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:54.359 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.359 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:54.359 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.359 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:54.359 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.359 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:54.618 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.618 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:54.618 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.618 19:26:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:54.876 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.876 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
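[editor's note] Each check_status above expands into six port_status probes, one per (port, field) pair: current, connected, and accessible for ports 4420 and 4421. Reassembled from the trace as a sketch (the helper's exact body isn't fully shown in the log), a probe queries bdevperf's RPC socket and filters the reported I/O paths by listener port:

    port_status() {   # usage: port_status <trsvcid> <field> <expected>
        local status
        status=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $status == "$3" ]]
    }
    port_status 4420 current true      # expected while 4420's ANA state is optimized
    port_status 4421 accessible true

The [[ true == \t\r\u\e ]] lines in the trace are bash's xtrace rendering of that final comparison, with the expected value escaped character by character.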
00:31:54.876 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.876 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:54.876 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.876 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:54.876 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:55.134 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:55.393 19:26:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:56.330 19:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:56.330 19:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:56.330 19:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.330 19:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:56.589 19:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:56.589 19:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:56.589 19:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.589 19:26:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:56.848 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.848 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:56.848 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.848 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:57.107 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.107 19:26:31 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:57.107 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.107 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:57.107 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.107 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:57.107 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:57.107 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.366 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.366 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:57.366 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.366 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:57.626 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.626 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:57.626 19:26:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:57.885 19:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:31:57.885 19:26:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:59.263 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:59.263 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:59.263 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.263 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:59.263 19:26:33 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.263 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:59.263 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.263 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:59.522 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.522 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:59.522 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.522 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:59.522 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.522 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:59.522 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.522 19:26:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:59.781 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.781 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:59.781 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.781 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:00.041 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.041 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:00.041 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.041 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:00.300 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.300 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:32:00.300 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:00.300 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:32:00.559 19:26:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:01.502 19:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:01.502 19:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:01.502 19:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.502 19:26:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:01.761 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.761 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:01.761 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.761 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:02.020 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.020 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:02.020 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.020 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:02.279 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.279 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:02.279 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.279 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:02.279 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.279 19:26:36 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:02.279 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.279 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:02.538 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.538 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:02.538 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.538 19:26:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:02.797 19:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:02.797 19:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:02.797 19:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:32:03.056 19:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:32:03.056 19:26:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:04.434 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:04.434 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:04.434 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:04.434 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.434 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.434 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:04.434 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.434 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:04.693 19:26:38 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:04.693 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:04.693 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.693 19:26:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:04.693 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.693 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:04.693 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.693 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:04.952 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.952 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:04.952 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.952 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:05.211 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:05.211 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:05.211 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.211 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:05.470 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:05.470 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:05.470 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:32:05.470 19:26:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:32:05.729 19:26:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:06.666 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:06.666 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:06.667 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.667 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:06.925 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:06.925 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:06.925 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.925 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:07.184 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.184 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:07.184 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.184 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:07.443 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.443 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:07.443 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.443 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:07.443 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.443 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:07.702 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.702 19:26:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:07.702 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:32:07.702 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:07.702 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.702 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:07.961 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.961 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:08.220 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:08.221 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:32:08.480 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:32:08.480 19:26:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:09.858 19:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:09.858 19:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:09.858 19:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:09.858 19:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.858 19:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.858 19:26:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:09.859 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.859 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:09.859 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.859 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:09.859 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.859 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:10.118 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.118 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:10.118 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.118 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:10.377 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.377 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:10.377 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.377 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:10.636 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.636 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:10.636 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.636 19:26:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:10.636 19:26:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.636 19:26:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:10.636 19:26:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:10.902 19:26:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:32:11.161 19:26:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:12.098 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:12.098 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:12.098 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.098 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:12.357 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:12.357 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:12.357 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.357 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:12.616 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.616 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:12.616 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:12.616 19:26:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.875 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.875 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:12.875 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.875 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:12.875 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:12.875 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:12.875 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:12.875 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:13.135 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.135 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:13.135 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:13.135 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:13.393 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:13.393 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:13.394 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:13.653 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:32:13.653 19:26:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:15.032 19:26:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:15.032 19:26:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:15.032 19:26:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:15.032 19:26:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.032 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.032 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:15.032 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.032 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:15.032 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.032 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:15.032 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.032 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:15.291 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.291 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:15.291 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:15.291 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:15.582 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.582 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:15.582 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.582 19:26:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:15.873 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.873 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:15.873 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:15.873 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:15.874 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:15.874 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:15.874 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:32:16.195 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:32:16.533 19:26:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:17.598 19:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:17.598 19:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:17.599 19:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.599 19:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:17.599 19:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.599 19:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:17.599 19:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.599 19:26:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:17.858 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:17.858 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:17.858 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.858 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:17.858 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:17.858 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:17.858 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:17.858 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:18.117 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.117 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:18.117 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.117 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:18.377 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:18.377 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:18.377 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:18.377 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:18.636 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:18.636 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 473477 00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 473477 ']' 00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 473477 00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname
00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 473477
00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 473477'
killing process with pid 473477
00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 473477
00:32:18.637 19:26:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 473477
00:32:18.637 {
00:32:18.637 "results": [
00:32:18.637 {
00:32:18.637 "job": "Nvme0n1",
00:32:18.637 "core_mask": "0x4",
00:32:18.637 "workload": "verify",
00:32:18.637 "status": "terminated",
00:32:18.637 "verify_range": {
00:32:18.637 "start": 0,
00:32:18.637 "length": 16384
00:32:18.637 },
00:32:18.637 "queue_depth": 128,
00:32:18.637 "io_size": 4096,
00:32:18.637 "runtime": 28.082581,
00:32:18.637 "iops": 15917.340361272349,
00:32:18.637 "mibps": 62.17711078622011,
00:32:18.637 "io_failed": 0,
00:32:18.637 "io_timeout": 0,
00:32:18.637 "avg_latency_us": 8022.370106072484,
00:32:18.637 "min_latency_us": 53.6576,
00:32:18.637 "max_latency_us": 3019898.88
00:32:18.637 }
00:32:18.637 ],
00:32:18.637 "core_count": 1
00:32:18.637 }
00:32:18.899 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 473477
00:32:18.899 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:18.899 [2024-12-13 19:26:23.694320] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:32:18.899 [2024-12-13 19:26:23.694383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473477 ]
00:32:18.899 [2024-12-13 19:26:23.789515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:18.899 [2024-12-13 19:26:23.811469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:32:18.899 Running I/O for 90 seconds...
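The xtrace above is the test's whole verification loop: set_ANA_state re-labels the two rdma listeners (trsvcid 4420 and 4421) on the target, the sleep 1 gives the host time to pick up the updated ANA log page, and check_status then asserts the expected current/connected/accessible flags, one port_status probe per field. A minimal sketch of the two helpers as they can be reconstructed from the trace (variable names here are illustrative; the actual bodies in host/multipath_status.sh may differ in detail):

    # Reconstruction from the xtrace above, not the verbatim script.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    set_ANA_state() { # $1 = ANA state for port 4420, $2 = ANA state for port 4421
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

    port_status() { # $1 = trsvcid, $2 = current|connected|accessible, $3 = expected
        local status
        status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

The terminated job's counters are also self-consistent: 15917.34 IOPS at "io_size": 4096 works out to 15917.34 * 4096 / 1048576 ≈ 62.18 MiB/s, matching the reported "mibps", and "io_failed": 0 means none of the ANA transitions above cost a single I/O.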
00:32:18.900 18560.00 IOPS, 72.50 MiB/s [2024-12-13T18:26:53.278Z] 18688.00 IOPS, 73.00 MiB/s [2024-12-13T18:26:53.278Z] 18690.33 IOPS, 73.01 MiB/s [2024-12-13T18:26:53.278Z] 18681.75 IOPS, 72.98 MiB/s [2024-12-13T18:26:53.278Z] 18624.60 IOPS, 72.75 MiB/s [2024-12-13T18:26:53.278Z] 18624.50 IOPS, 72.75 MiB/s [2024-12-13T18:26:53.278Z] 18620.57 IOPS, 72.74 MiB/s [2024-12-13T18:26:53.278Z] 18609.38 IOPS, 72.69 MiB/s [2024-12-13T18:26:53.278Z] 18593.44 IOPS, 72.63 MiB/s [2024-12-13T18:26:53.278Z] 18580.00 IOPS, 72.58 MiB/s [2024-12-13T18:26:53.278Z] 18570.18 IOPS, 72.54 MiB/s [2024-12-13T18:26:53.278Z] 18550.25 IOPS, 72.46 MiB/s [2024-12-13T18:26:53.278Z] [2024-12-13 19:26:37.204098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x182900 00:32:18.900 [2024-12-13 19:26:37.204288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x182900 00:32:18.900 [2024-12-13 19:26:37.204308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 
cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x182900 00:32:18.900 [2024-12-13 19:26:37.204328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x182900 00:32:18.900 [2024-12-13 19:26:37.204354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x182900 00:32:18.900 [2024-12-13 19:26:37.204374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x182900 00:32:18.900 [2024-12-13 19:26:37.204394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x182900 00:32:18.900 [2024-12-13 19:26:37.204414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:18.900 [2024-12-13 19:26:37.204901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.900 [2024-12-13 19:26:37.204910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.204922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.204932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
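Every completion in this tail carries status (03/02), which spdk_nvme_print_completion renders as ASYMMETRIC ACCESS INACCESSIBLE: status code type 3h is the NVMe Path Related Status group, and status code 02h inside it is Asymmetric Access Inaccessible, exactly what a controller should return once its listener has been put into the inaccessible ANA state. With the multipath policy set to active_active above, bdev_nvme can retry these on the surviving optimized path instead of failing the I/O, consistent with "io_failed": 0 in the results block. A small decoder for this status group (a hypothetical helper, not part of the test scripts; the values are the Path Related Status codes from the NVMe base specification):

    decode_path_status() { # $1 = the "(SCT/SC)" pair as printed, e.g. 03/02
        case "$1" in
            03/00) echo "Internal Path Error" ;;
            03/01) echo "Asymmetric Access Persistent Loss" ;;
            03/02) echo "Asymmetric Access Inaccessible" ;;
            03/03) echo "Asymmetric Access Transition" ;;
            *)     echo "not a Path Related Status: $1" ;;
        esac
    }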
00:32:18.901 [2024-12-13 19:26:37.204945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.204954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.204964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.204973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.204984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.204993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
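The command prints also show the two RDMA data paths side by side: the WRITEs carry SGL DATA BLOCK OFFSET 0x0, meaning the 4 KiB payload travels in-capsule, while the interleaved READs carry SGL KEYED DATA BLOCK ADDRESS ... key:0x182900, a registered host buffer plus rkey that the target RDMA-writes the read data back into. For a quick tally of a captured try.txt like the one being dumped here, an ad-hoc one-liner along these lines works (the field matching assumes exactly the print_command format shown):

    # Count nvme_io_qpair_print_command lines per opcode in the captured log.
    awk '/nvme_io_qpair_print_command/ {
             op = ""
             for (i = 1; i <= NF; i++) if ($i == "READ" || $i == "WRITE") op = $i
             if (op != "") n[op]++
         }
         END { for (o in n) print o, n[o] }' try.txt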
00:32:18.901 [2024-12-13 19:26:37.205579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.901 [2024-12-13 19:26:37.205745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:18.901 [2024-12-13 19:26:37.205756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.902 [2024-12-13 19:26:37.205768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.902 [2024-12-13 19:26:37.205788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.902 [2024-12-13 19:26:37.205808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.205828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.205849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.205871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.205892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.205913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.205933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.205954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x182900 00:32:18.902 
[2024-12-13 19:26:37.205974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.205985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.205994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:18.902 [2024-12-13 19:26:37.206429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:18.902 [2024-12-13 19:26:37.206523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x182900 00:32:18.902 [2024-12-13 19:26:37.206531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 
sqhd:0009 p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:18.903 [2024-12-13 19:26:37.206704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x182900 00:32:18.903 [2024-12-13 19:26:37.206713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:18.903 
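A burst like the one condensed above is easier to triage with an offline tally than by reading the notices one by one. A minimal sketch, assuming the console output has been saved one entry per line to build.log (a hypothetical file name, not something the harness produces):

# Hypothetical helpers: count the failed completions, then break the printed commands down by opcode.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log
awk '/nvme_io_qpair_print_command/ {
         for (i = 1; i <= NF; i++) if ($i == "READ" || $i == "WRITE") n[$i]++
     }
     END { for (op in n) print op, n[op] }' build.log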
00:32:18.903 17762.77 IOPS, 69.39 MiB/s [2024-12-13T18:26:53.281Z] 16494.00 IOPS, 64.43 MiB/s [2024-12-13T18:26:53.281Z] 15394.40 IOPS, 60.13 MiB/s [2024-12-13T18:26:53.281Z] 15064.19 IOPS, 58.84 MiB/s [2024-12-13T18:26:53.281Z] 15266.18 IOPS, 59.63 MiB/s [2024-12-13T18:26:53.281Z] 15421.11 IOPS, 60.24 MiB/s [2024-12-13T18:26:53.281Z] 15407.21 IOPS, 60.18 MiB/s [2024-12-13T18:26:53.281Z] 15388.25 IOPS, 60.11 MiB/s [2024-12-13T18:26:53.281Z] 15475.24 IOPS, 60.45 MiB/s [2024-12-13T18:26:53.281Z] 15622.09 IOPS, 61.02 MiB/s [2024-12-13T18:26:53.281Z] 15753.83 IOPS, 61.54 MiB/s [2024-12-13T18:26:53.281Z] 15754.38 IOPS, 61.54 MiB/s [2024-12-13T18:26:53.281Z] 15721.72 IOPS, 61.41 MiB/s [2024-12-13T18:26:53.281Z]
00:32:18.903 [2024-12-13 19:26:50.583-.585] nvme_qpair.c: [condensed: a second burst of *NOTICE* command/completion pairs on qid:1 for READ lba:92888-93440 len:8 (SGL KEYED DATA BLOCK, key:0x182900) and WRITE lba:93464-93904 len:8 (SGL DATA BLOCK OFFSET 0x0); again every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0]
00:32:18.905 15713.85 IOPS, 61.38 MiB/s [2024-12-13T18:26:53.283Z] 15819.04 IOPS, 61.79 MiB/s [2024-12-13T18:26:53.283Z] 15913.18 IOPS, 62.16 MiB/s [2024-12-13T18:26:53.283Z] Received shutdown signal, test time was about 28.083275 seconds
00:32:18.905
00:32:18.905 Latency(us)
00:32:18.905 [2024-12-13T18:26:53.283Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:18.905 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:18.905 Verification LBA range: start 0x0 length 0x4000
00:32:18.905 	Nvme0n1             :      28.08   15917.34      62.18       0.00       0.00    8022.37      53.66 3019898.88
00:32:18.905 [2024-12-13T18:26:53.283Z] ===================================================================================================================
00:32:18.905 [2024-12-13T18:26:53.283Z] Total               :            15917.34      62.18       0.00       0.00    8022.37      53.66 3019898.88
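The MiB/s column in the summary above follows directly from the IOPS column and the 4096-byte IO size declared for the job: 15917.34 IOPS at 4096 bytes per IO is 15917.34 * 4096 / 2^20 MiB/s, which rounds to the reported 62.18. A one-line check (a hypothetical convenience, not part of the harness):

# Convert IOPS at a fixed IO size to MiB/s; prints 62.18 MiB/s for the Total row above.
awk -v iops=15917.34 -v sz=4096 'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'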
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:18.905 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 473216 ']'
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 473216
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 473216 ']'
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 473216
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:19.164 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 473216
00:32:19.165 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:19.165 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:19.165 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 473216'
killing process with pid 473216
19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 473216
00:32:19.165 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 473216
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:32:19.424
00:32:19.424 real	0m38.980s
00:32:19.424 user	1m50.013s
00:32:19.424 sys	0m9.496s
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:19.424 ************************************
00:32:19.424 END TEST nvmf_host_multipath_status
00:32:19.424 ************************************
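The nvmfcleanup trace above (nvmf/common.sh@121-129) shows the unload idiom the harness uses for the rdma transport: sync, tolerate failures with set +e while the nvme-rdma and nvme-fabrics modules are removed (retried up to 20 times), then restore set -e. A minimal sketch of that loop; the break-on-success condition is an assumption, since the trace only shows the single pass that succeeded:

# Sketch of the module-unload retry loop traced above (simplified; exit condition assumed).
nvmfcleanup_sketch() {
    sync
    set +e                               # tolerate "module is in use" failures
    for i in {1..20}; do
        modprobe -v -r nvme-rdma &&      # -v prints the rmmod lines seen above
            modprobe -v -r nvme-fabrics &&
            break                        # assumption: stop once both modules unload
    done
    set -e
    return 0
}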
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:19.424 ************************************
00:32:19.424 START TEST nvmf_discovery_remove_ifc
00:32:19.424 ************************************
00:32:19.424 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:32:19.424 * Looking for test storage...
00:32:19.684 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:19.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:19.684 --rc genhtml_branch_coverage=1
00:32:19.684 --rc genhtml_function_coverage=1
00:32:19.684 --rc genhtml_legend=1
00:32:19.684 --rc geninfo_all_blocks=1
00:32:19.684 --rc geninfo_unexecuted_blocks=1
00:32:19.684
00:32:19.684 '
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:19.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:19.684 --rc genhtml_branch_coverage=1
00:32:19.684 --rc genhtml_function_coverage=1
00:32:19.684 --rc genhtml_legend=1
00:32:19.684 --rc geninfo_all_blocks=1
00:32:19.684 --rc geninfo_unexecuted_blocks=1
00:32:19.684
00:32:19.684 '
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:32:19.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:19.684 --rc genhtml_branch_coverage=1
00:32:19.684 --rc genhtml_function_coverage=1
00:32:19.684 --rc genhtml_legend=1
00:32:19.684 --rc geninfo_all_blocks=1
00:32:19.684 --rc geninfo_unexecuted_blocks=1
00:32:19.684
00:32:19.684 '
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:32:19.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:19.684 --rc genhtml_branch_coverage=1
00:32:19.684 --rc genhtml_function_coverage=1
00:32:19.684 --rc genhtml_legend=1
00:32:19.684 --rc geninfo_all_blocks=1
00:32:19.684 --rc geninfo_unexecuted_blocks=1
00:32:19.684
00:32:19.684 '
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
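The trace above steps through scripts/common.sh's version comparison: lt 1.15 2 calls cmp_versions, which splits both versions on the characters . - : and walks them component by component; the first differing component decides the result (here ver1[0]=1 < ver2[0]=2, so lt returns 0, i.e. true). A minimal sketch of that walk, assuming missing components default to zero; the real cmp_versions also handles the >, =, and mixed-length cases:

# Sketch of the lt/cmp_versions walk traced above (simplified; missing parts default to 0).
lt_sketch() {                                   # usage: lt_sketch 1.15 2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                                    # equal versions: not strictly less
}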
00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.684 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:19.685 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:32:19.685 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:32:19.685 00:32:19.685 real 0m0.239s 00:32:19.685 user 0m0.124s 00:32:19.685 sys 0m0.132s 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.685 ************************************ 00:32:19.685 END TEST nvmf_discovery_remove_ifc 00:32:19.685 ************************************ 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.685 19:26:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.685 ************************************ 00:32:19.685 START TEST nvmf_identify_kernel_target 00:32:19.685 ************************************ 00:32:19.685 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:32:19.945 * Looking for test storage... 00:32:19.945 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.945 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:19.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.946 --rc genhtml_branch_coverage=1 00:32:19.946 --rc genhtml_function_coverage=1 00:32:19.946 --rc genhtml_legend=1 00:32:19.946 --rc geninfo_all_blocks=1 00:32:19.946 --rc geninfo_unexecuted_blocks=1 00:32:19.946 00:32:19.946 ' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:19.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.946 --rc genhtml_branch_coverage=1 00:32:19.946 --rc genhtml_function_coverage=1 00:32:19.946 --rc genhtml_legend=1 00:32:19.946 --rc geninfo_all_blocks=1 00:32:19.946 --rc geninfo_unexecuted_blocks=1 00:32:19.946 00:32:19.946 ' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:19.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.946 --rc genhtml_branch_coverage=1 00:32:19.946 --rc genhtml_function_coverage=1 00:32:19.946 --rc genhtml_legend=1 00:32:19.946 --rc geninfo_all_blocks=1 00:32:19.946 --rc geninfo_unexecuted_blocks=1 00:32:19.946 00:32:19.946 ' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:19.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.946 --rc genhtml_branch_coverage=1 00:32:19.946 --rc genhtml_function_coverage=1 00:32:19.946 --rc genhtml_legend=1 00:32:19.946 --rc geninfo_all_blocks=1 00:32:19.946 --rc geninfo_unexecuted_blocks=1 00:32:19.946 00:32:19.946 ' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:19.946 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.946 19:26:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
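The "[: : integer expression expected" complaint at the start of this line (and once per test earlier) comes from the traced test '[' '' -eq 1 ']': the variable under test expands to an empty string, and -eq requires an integer, so the test errors out and simply evaluates false, letting the script continue. The trace does not show which variable it is, so the name below is a hypothetical stand-in; a default expansion is the usual guard:

  flag=""                                  # hypothetical; empty, as in the trace
  [ "$flag" -eq 1 ] 2>/dev/null || echo "empty string fails -eq with 'integer expression expected'"
  [ "${flag:-0}" -eq 1 ] || echo "defaulting to 0 keeps the test quiet and false"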
local -ga x722 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.076 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:28.077 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:28.077 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:28.077 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:28.077 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.077 19:27:01 
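The scan above resolves each Mellanox PCI function (vendor 0x15b3, device 0x1015) to its kernel netdev by globbing the device's net/ directory in sysfs and stripping the path prefix. A condensed sketch of that mapping, using the two addresses from this run:

  for pci in 0000:d9:00.0 0000:d9:00.1; do
      # each entry under .../net/ is a netdev owned by this PCI function
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep basenames only, e.g. mlx_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done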
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:28.077 
19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:28.077 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:28.077 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:28.077 altname enp217s0f0np0 00:32:28.077 altname ens818f0np0 00:32:28.077 inet 192.168.100.8/24 scope global mlx_0_0 00:32:28.077 valid_lft forever preferred_lft forever 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:28.077 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:28.078 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:28.078 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:28.078 altname enp217s0f1np1 00:32:28.078 altname ens818f1np1 00:32:28.078 inet 192.168.100.9/24 scope global mlx_0_1 00:32:28.078 valid_lft forever preferred_lft forever 00:32:28.078 19:27:01 
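get_ip_address, traced above for both ports, reduces the one-line ip -o -4 output to a bare address: field 4 holds the CIDR form and cut drops the prefix length. The equivalent one-liner for the first port:

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # field 4 is "192.168.100.8/24"; cut yields 192.168.100.8

The DOWN state shown in the ip output is expected at this point; the addresses were just assigned and the links come up when the target starts listening.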
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:28.078 
19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:28.078 192.168.100.9' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:28.078 192.168.100.9' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:28.078 192.168.100.9' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:28.078 19:27:01 
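With both addresses collected into RDMA_IP_LIST, picking the first and second target IPs is just taking the first and second lines, exactly as the head/tail pipelines above show:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9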
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:28.078 19:27:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:32:30.618 Waiting for block devices as requested 00:32:30.618 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:30.618 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:30.878 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:30.878 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:30.878 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:31.137 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:31.137 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:31.137 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:31.398 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:31.398 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:31.398 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:31.656 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:31.656 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:31.656 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:31.915 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:31.915 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:32.175 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:32.175 19:27:06 
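Once the checks continuing below confirm /dev/nvme0n1 is unzoned and unpartitioned ("No valid GPT data, bailing" means the GPT probe found nothing, so the device is free), configure_kernel_target builds a kernel NVMe-oF target purely through nvmet configfs: make the directories, write the attributes, then symlink the subsystem into the port. Note the xtrace below elides redirection targets, so the destinations in this sketch are the standard nvmet configfs attribute names rather than anything visible in the log (the SPDK-nqn... string echoed just before the others is omitted here, since its destination attribute cannot be seen in the trace):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo 1             > "$subsys/attr_allow_any_host"       # assumed target of the first bare 'echo 1'
  echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"  # backing block device found below
  echo 1             > "$subsys/namespaces/1/enable"
  echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
  echo rdma          > "$nvmet/ports/1/addr_trtype"
  echo 4420          > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4          > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"             # exposes the subsystem on the port

After the symlink, the nvme discover and spdk_nvme_identify runs that follow see two discovery log records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, both at 192.168.100.8:4420 over RDMA.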
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:32.175 No valid GPT data, bailing 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:32.175 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:32:32.435 00:32:32.435 Discovery Log Number of Records 2, Generation counter 2 00:32:32.435 =====Discovery Log Entry 0====== 00:32:32.435 trtype: rdma 00:32:32.435 adrfam: ipv4 00:32:32.435 subtype: current discovery subsystem 00:32:32.435 treq: not specified, sq 
flow control disable supported 00:32:32.435 portid: 1 00:32:32.435 trsvcid: 4420 00:32:32.435 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:32.435 traddr: 192.168.100.8 00:32:32.435 eflags: none 00:32:32.435 rdma_prtype: not specified 00:32:32.435 rdma_qptype: connected 00:32:32.435 rdma_cms: rdma-cm 00:32:32.435 rdma_pkey: 0x0000 00:32:32.435 =====Discovery Log Entry 1====== 00:32:32.435 trtype: rdma 00:32:32.435 adrfam: ipv4 00:32:32.435 subtype: nvme subsystem 00:32:32.435 treq: not specified, sq flow control disable supported 00:32:32.435 portid: 1 00:32:32.435 trsvcid: 4420 00:32:32.435 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:32.435 traddr: 192.168.100.8 00:32:32.435 eflags: none 00:32:32.435 rdma_prtype: not specified 00:32:32.435 rdma_qptype: connected 00:32:32.435 rdma_cms: rdma-cm 00:32:32.435 rdma_pkey: 0x0000 00:32:32.435 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:32:32.435 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:32.696 ===================================================== 00:32:32.696 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:32.696 ===================================================== 00:32:32.696 Controller Capabilities/Features 00:32:32.696 ================================ 00:32:32.696 Vendor ID: 0000 00:32:32.696 Subsystem Vendor ID: 0000 00:32:32.696 Serial Number: 786578b9552c35865e5c 00:32:32.696 Model Number: Linux 00:32:32.696 Firmware Version: 6.8.9-20 00:32:32.696 Recommended Arb Burst: 0 00:32:32.696 IEEE OUI Identifier: 00 00 00 00:32:32.696 Multi-path I/O 00:32:32.696 May have multiple subsystem ports: No 00:32:32.696 May have multiple controllers: No 00:32:32.696 Associated with SR-IOV VF: No 00:32:32.696 Max Data Transfer Size: Unlimited 00:32:32.696 Max Number of Namespaces: 0 00:32:32.696 Max Number of I/O Queues: 1024 00:32:32.696 NVMe Specification Version (VS): 1.3 00:32:32.696 NVMe Specification Version (Identify): 1.3 00:32:32.696 Maximum Queue Entries: 128 00:32:32.696 Contiguous Queues Required: No 00:32:32.696 Arbitration Mechanisms Supported 00:32:32.696 Weighted Round Robin: Not Supported 00:32:32.696 Vendor Specific: Not Supported 00:32:32.696 Reset Timeout: 7500 ms 00:32:32.696 Doorbell Stride: 4 bytes 00:32:32.696 NVM Subsystem Reset: Not Supported 00:32:32.696 Command Sets Supported 00:32:32.696 NVM Command Set: Supported 00:32:32.696 Boot Partition: Not Supported 00:32:32.696 Memory Page Size Minimum: 4096 bytes 00:32:32.696 Memory Page Size Maximum: 4096 bytes 00:32:32.696 Persistent Memory Region: Not Supported 00:32:32.696 Optional Asynchronous Events Supported 00:32:32.696 Namespace Attribute Notices: Not Supported 00:32:32.696 Firmware Activation Notices: Not Supported 00:32:32.696 ANA Change Notices: Not Supported 00:32:32.696 PLE Aggregate Log Change Notices: Not Supported 00:32:32.696 LBA Status Info Alert Notices: Not Supported 00:32:32.696 EGE Aggregate Log Change Notices: Not Supported 00:32:32.696 Normal NVM Subsystem Shutdown event: Not Supported 00:32:32.696 Zone Descriptor Change Notices: Not Supported 00:32:32.696 Discovery Log Change Notices: Supported 00:32:32.696 Controller Attributes 00:32:32.696 128-bit Host Identifier: Not Supported 00:32:32.696 Non-Operational Permissive Mode: Not Supported 00:32:32.696 NVM Sets: Not Supported 00:32:32.696 Read Recovery Levels: 
Not Supported 00:32:32.696 Endurance Groups: Not Supported 00:32:32.696 Predictable Latency Mode: Not Supported 00:32:32.696 Traffic Based Keep ALive: Not Supported 00:32:32.696 Namespace Granularity: Not Supported 00:32:32.696 SQ Associations: Not Supported 00:32:32.696 UUID List: Not Supported 00:32:32.696 Multi-Domain Subsystem: Not Supported 00:32:32.696 Fixed Capacity Management: Not Supported 00:32:32.696 Variable Capacity Management: Not Supported 00:32:32.696 Delete Endurance Group: Not Supported 00:32:32.696 Delete NVM Set: Not Supported 00:32:32.696 Extended LBA Formats Supported: Not Supported 00:32:32.696 Flexible Data Placement Supported: Not Supported 00:32:32.696 00:32:32.696 Controller Memory Buffer Support 00:32:32.696 ================================ 00:32:32.696 Supported: No 00:32:32.696 00:32:32.696 Persistent Memory Region Support 00:32:32.696 ================================ 00:32:32.696 Supported: No 00:32:32.696 00:32:32.696 Admin Command Set Attributes 00:32:32.696 ============================ 00:32:32.696 Security Send/Receive: Not Supported 00:32:32.696 Format NVM: Not Supported 00:32:32.696 Firmware Activate/Download: Not Supported 00:32:32.696 Namespace Management: Not Supported 00:32:32.696 Device Self-Test: Not Supported 00:32:32.696 Directives: Not Supported 00:32:32.696 NVMe-MI: Not Supported 00:32:32.696 Virtualization Management: Not Supported 00:32:32.696 Doorbell Buffer Config: Not Supported 00:32:32.696 Get LBA Status Capability: Not Supported 00:32:32.696 Command & Feature Lockdown Capability: Not Supported 00:32:32.696 Abort Command Limit: 1 00:32:32.696 Async Event Request Limit: 1 00:32:32.696 Number of Firmware Slots: N/A 00:32:32.696 Firmware Slot 1 Read-Only: N/A 00:32:32.696 Firmware Activation Without Reset: N/A 00:32:32.696 Multiple Update Detection Support: N/A 00:32:32.696 Firmware Update Granularity: No Information Provided 00:32:32.696 Per-Namespace SMART Log: No 00:32:32.696 Asymmetric Namespace Access Log Page: Not Supported 00:32:32.696 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:32.696 Command Effects Log Page: Not Supported 00:32:32.696 Get Log Page Extended Data: Supported 00:32:32.696 Telemetry Log Pages: Not Supported 00:32:32.696 Persistent Event Log Pages: Not Supported 00:32:32.696 Supported Log Pages Log Page: May Support 00:32:32.696 Commands Supported & Effects Log Page: Not Supported 00:32:32.696 Feature Identifiers & Effects Log Page:May Support 00:32:32.696 NVMe-MI Commands & Effects Log Page: May Support 00:32:32.696 Data Area 4 for Telemetry Log: Not Supported 00:32:32.696 Error Log Page Entries Supported: 1 00:32:32.696 Keep Alive: Not Supported 00:32:32.696 00:32:32.696 NVM Command Set Attributes 00:32:32.696 ========================== 00:32:32.696 Submission Queue Entry Size 00:32:32.696 Max: 1 00:32:32.696 Min: 1 00:32:32.696 Completion Queue Entry Size 00:32:32.696 Max: 1 00:32:32.696 Min: 1 00:32:32.696 Number of Namespaces: 0 00:32:32.696 Compare Command: Not Supported 00:32:32.696 Write Uncorrectable Command: Not Supported 00:32:32.696 Dataset Management Command: Not Supported 00:32:32.696 Write Zeroes Command: Not Supported 00:32:32.696 Set Features Save Field: Not Supported 00:32:32.696 Reservations: Not Supported 00:32:32.696 Timestamp: Not Supported 00:32:32.696 Copy: Not Supported 00:32:32.696 Volatile Write Cache: Not Present 00:32:32.696 Atomic Write Unit (Normal): 1 00:32:32.696 Atomic Write Unit (PFail): 1 00:32:32.696 Atomic Compare & Write Unit: 1 00:32:32.696 Fused Compare & Write: Not 
Supported 00:32:32.696 Scatter-Gather List 00:32:32.696 SGL Command Set: Supported 00:32:32.696 SGL Keyed: Supported 00:32:32.696 SGL Bit Bucket Descriptor: Not Supported 00:32:32.696 SGL Metadata Pointer: Not Supported 00:32:32.696 Oversized SGL: Not Supported 00:32:32.696 SGL Metadata Address: Not Supported 00:32:32.696 SGL Offset: Supported 00:32:32.696 Transport SGL Data Block: Not Supported 00:32:32.696 Replay Protected Memory Block: Not Supported 00:32:32.696 00:32:32.696 Firmware Slot Information 00:32:32.696 ========================= 00:32:32.696 Active slot: 0 00:32:32.696 00:32:32.696 00:32:32.696 Error Log 00:32:32.696 ========= 00:32:32.696 00:32:32.696 Active Namespaces 00:32:32.696 ================= 00:32:32.696 Discovery Log Page 00:32:32.696 ================== 00:32:32.696 Generation Counter: 2 00:32:32.696 Number of Records: 2 00:32:32.696 Record Format: 0 00:32:32.696 00:32:32.696 Discovery Log Entry 0 00:32:32.696 ---------------------- 00:32:32.696 Transport Type: 1 (RDMA) 00:32:32.696 Address Family: 1 (IPv4) 00:32:32.696 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:32.696 Entry Flags: 00:32:32.696 Duplicate Returned Information: 0 00:32:32.696 Explicit Persistent Connection Support for Discovery: 0 00:32:32.696 Transport Requirements: 00:32:32.696 Secure Channel: Not Specified 00:32:32.696 Port ID: 1 (0x0001) 00:32:32.696 Controller ID: 65535 (0xffff) 00:32:32.696 Admin Max SQ Size: 32 00:32:32.696 Transport Service Identifier: 4420 00:32:32.696 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:32.696 Transport Address: 192.168.100.8 00:32:32.696 Transport Specific Address Subtype - RDMA 00:32:32.696 RDMA QP Service Type: 1 (Reliable Connected) 00:32:32.696 RDMA Provider Type: 1 (No provider specified) 00:32:32.696 RDMA CM Service: 1 (RDMA_CM) 00:32:32.696 Discovery Log Entry 1 00:32:32.697 ---------------------- 00:32:32.697 Transport Type: 1 (RDMA) 00:32:32.697 Address Family: 1 (IPv4) 00:32:32.697 Subsystem Type: 2 (NVM Subsystem) 00:32:32.697 Entry Flags: 00:32:32.697 Duplicate Returned Information: 0 00:32:32.697 Explicit Persistent Connection Support for Discovery: 0 00:32:32.697 Transport Requirements: 00:32:32.697 Secure Channel: Not Specified 00:32:32.697 Port ID: 1 (0x0001) 00:32:32.697 Controller ID: 65535 (0xffff) 00:32:32.697 Admin Max SQ Size: 32 00:32:32.697 Transport Service Identifier: 4420 00:32:32.697 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:32.697 Transport Address: 192.168.100.8 00:32:32.697 Transport Specific Address Subtype - RDMA 00:32:32.697 RDMA QP Service Type: 1 (Reliable Connected) 00:32:32.697 RDMA Provider Type: 1 (No provider specified) 00:32:32.697 RDMA CM Service: 1 (RDMA_CM) 00:32:32.697 19:27:06 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:32.697 get_feature(0x01) failed 00:32:32.697 get_feature(0x02) failed 00:32:32.697 get_feature(0x04) failed 00:32:32.697 ===================================================== 00:32:32.697 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:32:32.697 ===================================================== 00:32:32.697 Controller Capabilities/Features 00:32:32.697 ================================ 00:32:32.697 Vendor ID: 0000 00:32:32.697 Subsystem Vendor ID: 0000 00:32:32.697 Serial Number: 
0cbcbc4bfa8201f03cc3 00:32:32.697 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:32.697 Firmware Version: 6.8.9-20 00:32:32.697 Recommended Arb Burst: 6 00:32:32.697 IEEE OUI Identifier: 00 00 00 00:32:32.697 Multi-path I/O 00:32:32.697 May have multiple subsystem ports: Yes 00:32:32.697 May have multiple controllers: Yes 00:32:32.697 Associated with SR-IOV VF: No 00:32:32.697 Max Data Transfer Size: 1048576 00:32:32.697 Max Number of Namespaces: 1024 00:32:32.697 Max Number of I/O Queues: 128 00:32:32.697 NVMe Specification Version (VS): 1.3 00:32:32.697 NVMe Specification Version (Identify): 1.3 00:32:32.697 Maximum Queue Entries: 128 00:32:32.697 Contiguous Queues Required: No 00:32:32.697 Arbitration Mechanisms Supported 00:32:32.697 Weighted Round Robin: Not Supported 00:32:32.697 Vendor Specific: Not Supported 00:32:32.697 Reset Timeout: 7500 ms 00:32:32.697 Doorbell Stride: 4 bytes 00:32:32.697 NVM Subsystem Reset: Not Supported 00:32:32.697 Command Sets Supported 00:32:32.697 NVM Command Set: Supported 00:32:32.697 Boot Partition: Not Supported 00:32:32.697 Memory Page Size Minimum: 4096 bytes 00:32:32.697 Memory Page Size Maximum: 4096 bytes 00:32:32.697 Persistent Memory Region: Not Supported 00:32:32.697 Optional Asynchronous Events Supported 00:32:32.697 Namespace Attribute Notices: Supported 00:32:32.697 Firmware Activation Notices: Not Supported 00:32:32.697 ANA Change Notices: Supported 00:32:32.697 PLE Aggregate Log Change Notices: Not Supported 00:32:32.697 LBA Status Info Alert Notices: Not Supported 00:32:32.697 EGE Aggregate Log Change Notices: Not Supported 00:32:32.697 Normal NVM Subsystem Shutdown event: Not Supported 00:32:32.697 Zone Descriptor Change Notices: Not Supported 00:32:32.697 Discovery Log Change Notices: Not Supported 00:32:32.697 Controller Attributes 00:32:32.697 128-bit Host Identifier: Supported 00:32:32.697 Non-Operational Permissive Mode: Not Supported 00:32:32.697 NVM Sets: Not Supported 00:32:32.697 Read Recovery Levels: Not Supported 00:32:32.697 Endurance Groups: Not Supported 00:32:32.697 Predictable Latency Mode: Not Supported 00:32:32.697 Traffic Based Keep Alive: Supported 00:32:32.697 Namespace Granularity: Not Supported 00:32:32.697 SQ Associations: Not Supported 00:32:32.697 UUID List: Not Supported 00:32:32.697 Multi-Domain Subsystem: Not Supported 00:32:32.697 Fixed Capacity Management: Not Supported 00:32:32.697 Variable Capacity Management: Not Supported 00:32:32.697 Delete Endurance Group: Not Supported 00:32:32.697 Delete NVM Set: Not Supported 00:32:32.697 Extended LBA Formats Supported: Not Supported 00:32:32.697 Flexible Data Placement Supported: Not Supported 00:32:32.697 00:32:32.697 Controller Memory Buffer Support 00:32:32.697 ================================ 00:32:32.697 Supported: No 00:32:32.697 00:32:32.697 Persistent Memory Region Support 00:32:32.697 ================================ 00:32:32.697 Supported: No 00:32:32.697 00:32:32.697 Admin Command Set Attributes 00:32:32.697 ============================ 00:32:32.697 Security Send/Receive: Not Supported 00:32:32.697 Format NVM: Not Supported 00:32:32.697 Firmware Activate/Download: Not Supported 00:32:32.697 Namespace Management: Not Supported 00:32:32.697 Device Self-Test: Not Supported 00:32:32.697 Directives: Not Supported 00:32:32.697 NVMe-MI: Not Supported 00:32:32.697 Virtualization Management: Not Supported 00:32:32.697 Doorbell Buffer Config: Not Supported 00:32:32.697 Get LBA Status Capability: Not Supported 00:32:32.697 Command & Feature Lockdown 
Capability: Not Supported 00:32:32.697 Abort Command Limit: 4 00:32:32.697 Async Event Request Limit: 4 00:32:32.697 Number of Firmware Slots: N/A 00:32:32.697 Firmware Slot 1 Read-Only: N/A 00:32:32.697 Firmware Activation Without Reset: N/A 00:32:32.697 Multiple Update Detection Support: N/A 00:32:32.697 Firmware Update Granularity: No Information Provided 00:32:32.697 Per-Namespace SMART Log: Yes 00:32:32.697 Asymmetric Namespace Access Log Page: Supported 00:32:32.697 ANA Transition Time : 10 sec 00:32:32.697 00:32:32.697 Asymmetric Namespace Access Capabilities 00:32:32.697 ANA Optimized State : Supported 00:32:32.697 ANA Non-Optimized State : Supported 00:32:32.697 ANA Inaccessible State : Supported 00:32:32.697 ANA Persistent Loss State : Supported 00:32:32.697 ANA Change State : Supported 00:32:32.697 ANAGRPID is not changed : No 00:32:32.697 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:32.697 00:32:32.697 ANA Group Identifier Maximum : 128 00:32:32.697 Number of ANA Group Identifiers : 128 00:32:32.697 Max Number of Allowed Namespaces : 1024 00:32:32.697 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:32.697 Command Effects Log Page: Supported 00:32:32.697 Get Log Page Extended Data: Supported 00:32:32.697 Telemetry Log Pages: Not Supported 00:32:32.697 Persistent Event Log Pages: Not Supported 00:32:32.697 Supported Log Pages Log Page: May Support 00:32:32.697 Commands Supported & Effects Log Page: Not Supported 00:32:32.697 Feature Identifiers & Effects Log Page: May Support 00:32:32.697 NVMe-MI Commands & Effects Log Page: May Support 00:32:32.697 Data Area 4 for Telemetry Log: Not Supported 00:32:32.697 Error Log Page Entries Supported: 128 00:32:32.697 Keep Alive: Supported 00:32:32.697 Keep Alive Granularity: 1000 ms 00:32:32.697 00:32:32.697 NVM Command Set Attributes 00:32:32.697 ========================== 00:32:32.697 Submission Queue Entry Size 00:32:32.697 Max: 64 00:32:32.697 Min: 64 00:32:32.697 Completion Queue Entry Size 00:32:32.697 Max: 16 00:32:32.697 Min: 16 00:32:32.697 Number of Namespaces: 1024 00:32:32.697 Compare Command: Not Supported 00:32:32.697 Write Uncorrectable Command: Not Supported 00:32:32.697 Dataset Management Command: Supported 00:32:32.697 Write Zeroes Command: Supported 00:32:32.697 Set Features Save Field: Not Supported 00:32:32.697 Reservations: Not Supported 00:32:32.697 Timestamp: Not Supported 00:32:32.697 Copy: Not Supported 00:32:32.697 Volatile Write Cache: Present 00:32:32.697 Atomic Write Unit (Normal): 1 00:32:32.697 Atomic Write Unit (PFail): 1 00:32:32.697 Atomic Compare & Write Unit: 1 00:32:32.697 Fused Compare & Write: Not Supported 00:32:32.697 Scatter-Gather List 00:32:32.697 SGL Command Set: Supported 00:32:32.697 SGL Keyed: Supported 00:32:32.697 SGL Bit Bucket Descriptor: Not Supported 00:32:32.697 SGL Metadata Pointer: Not Supported 00:32:32.697 Oversized SGL: Not Supported 00:32:32.697 SGL Metadata Address: Not Supported 00:32:32.697 SGL Offset: Supported 00:32:32.697 Transport SGL Data Block: Not Supported 00:32:32.697 Replay Protected Memory Block: Not Supported 00:32:32.697 00:32:32.697 Firmware Slot Information 00:32:32.697 ========================= 00:32:32.697 Active slot: 0 00:32:32.697 00:32:32.697 Asymmetric Namespace Access 00:32:32.697 =========================== 00:32:32.697 Change Count : 0 00:32:32.697 Number of ANA Group Descriptors : 1 00:32:32.697 ANA Group Descriptor : 0 00:32:32.697 ANA Group ID : 1 00:32:32.697 Number of NSID Values : 1 00:32:32.697 Change Count : 0 00:32:32.697 ANA State 
: 1 00:32:32.697 Namespace Identifier : 1 00:32:32.697 00:32:32.697 Commands Supported and Effects 00:32:32.697 ============================== 00:32:32.697 Admin Commands 00:32:32.697 -------------- 00:32:32.697 Get Log Page (02h): Supported 00:32:32.697 Identify (06h): Supported 00:32:32.697 Abort (08h): Supported 00:32:32.697 Set Features (09h): Supported 00:32:32.697 Get Features (0Ah): Supported 00:32:32.697 Asynchronous Event Request (0Ch): Supported 00:32:32.697 Keep Alive (18h): Supported 00:32:32.698 I/O Commands 00:32:32.698 ------------ 00:32:32.698 Flush (00h): Supported 00:32:32.698 Write (01h): Supported LBA-Change 00:32:32.698 Read (02h): Supported 00:32:32.698 Write Zeroes (08h): Supported LBA-Change 00:32:32.698 Dataset Management (09h): Supported 00:32:32.698 00:32:32.698 Error Log 00:32:32.698 ========= 00:32:32.698 Entry: 0 00:32:32.698 Error Count: 0x3 00:32:32.698 Submission Queue Id: 0x0 00:32:32.698 Command Id: 0x5 00:32:32.698 Phase Bit: 0 00:32:32.698 Status Code: 0x2 00:32:32.698 Status Code Type: 0x0 00:32:32.698 Do Not Retry: 1 00:32:32.698 Error Location: 0x28 00:32:32.698 LBA: 0x0 00:32:32.698 Namespace: 0x0 00:32:32.698 Vendor Log Page: 0x0 00:32:32.698 ----------- 00:32:32.698 Entry: 1 00:32:32.698 Error Count: 0x2 00:32:32.698 Submission Queue Id: 0x0 00:32:32.698 Command Id: 0x5 00:32:32.698 Phase Bit: 0 00:32:32.698 Status Code: 0x2 00:32:32.698 Status Code Type: 0x0 00:32:32.698 Do Not Retry: 1 00:32:32.698 Error Location: 0x28 00:32:32.698 LBA: 0x0 00:32:32.698 Namespace: 0x0 00:32:32.698 Vendor Log Page: 0x0 00:32:32.698 ----------- 00:32:32.698 Entry: 2 00:32:32.698 Error Count: 0x1 00:32:32.698 Submission Queue Id: 0x0 00:32:32.698 Command Id: 0x0 00:32:32.698 Phase Bit: 0 00:32:32.698 Status Code: 0x2 00:32:32.698 Status Code Type: 0x0 00:32:32.698 Do Not Retry: 1 00:32:32.698 Error Location: 0x28 00:32:32.698 LBA: 0x0 00:32:32.698 Namespace: 0x0 00:32:32.698 Vendor Log Page: 0x0 00:32:32.698 00:32:32.698 Number of Queues 00:32:32.698 ================ 00:32:32.698 Number of I/O Submission Queues: 128 00:32:32.698 Number of I/O Completion Queues: 128 00:32:32.698 00:32:32.698 ZNS Specific Controller Data 00:32:32.698 ============================ 00:32:32.698 Zone Append Size Limit: 0 00:32:32.698 00:32:32.698 00:32:32.698 Active Namespaces 00:32:32.698 ================= 00:32:32.698 get_feature(0x05) failed 00:32:32.698 Namespace ID:1 00:32:32.698 Command Set Identifier: NVM (00h) 00:32:32.698 Deallocate: Supported 00:32:32.698 Deallocated/Unwritten Error: Not Supported 00:32:32.698 Deallocated Read Value: Unknown 00:32:32.698 Deallocate in Write Zeroes: Not Supported 00:32:32.698 Deallocated Guard Field: 0xFFFF 00:32:32.698 Flush: Supported 00:32:32.698 Reservation: Not Supported 00:32:32.698 Namespace Sharing Capabilities: Multiple Controllers 00:32:32.698 Size (in LBAs): 3907029168 (1863GiB) 00:32:32.698 Capacity (in LBAs): 3907029168 (1863GiB) 00:32:32.698 Utilization (in LBAs): 3907029168 (1863GiB) 00:32:32.698 UUID: 9ecb9e1a-050c-420e-9190-575dbd5d0463 00:32:32.698 Thin Provisioning: Not Supported 00:32:32.698 Per-NS Atomic Units: Yes 00:32:32.698 Atomic Boundary Size (Normal): 0 00:32:32.698 Atomic Boundary Size (PFail): 0 00:32:32.698 Atomic Boundary Offset: 0 00:32:32.698 NGUID/EUI64 Never Reused: No 00:32:32.698 ANA group ID: 1 00:32:32.698 Namespace Write Protected: No 00:32:32.698 Number of LBA Formats: 1 00:32:32.698 Current LBA Format: LBA Format #00 00:32:32.698 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:32.698 00:32:32.698 
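The two identify dumps above come from spdk_nvme_identify pointed at the kernel NVMe-oF target over RDMA: first at the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery), then at the exported NVM subsystem. The get_feature(0x01/0x02/0x04/0x05) failures are benign here; the kernel target does not implement those optional features and the tool reports the failure and continues. Below is a minimal sketch of the invocations. Only the subnqn call is actually traced above; the discovery call is an assumption (same tool, no subnqn), and bin= is just a convenience variable for the build path used in this run.

# Sketch: querying the kernel target with SPDK's identify tool over RDMA.
# The -r argument is SPDK's space-separated key:value transport-ID string;
# address, port, and NQN are the ones used in this run.
bin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin

# Discovery subsystem (assumed invocation; yields the Discovery Log Page above):
$bin/spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

# NVM subsystem, as traced above:
$bin/spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'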
19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:32.698 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:32.698 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:32:32.698 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:32.698 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:32.698 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:32:32.698 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:32.698 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:32.957 rmmod nvme_rdma 00:32:32.958 rmmod nvme_fabrics 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:32:32.958 19:27:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:32:36.248 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:36.248 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:36.507 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:38.412 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:38.412 00:32:38.412 real 0m18.758s 00:32:38.412 user 0m5.175s 00:32:38.412 sys 0m11.005s 00:32:38.412 19:27:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.412 19:27:12 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.412 ************************************ 00:32:38.412 END TEST nvmf_identify_kernel_target 00:32:38.412 ************************************ 00:32:38.671 19:27:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:32:38.671 19:27:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:38.671 19:27:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:38.671 19:27:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.671 ************************************ 00:32:38.671 START TEST nvmf_auth_host 00:32:38.671 ************************************ 00:32:38.671 19:27:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:32:38.671 * Looking for test storage... 
00:32:38.671 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:38.671 19:27:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:38.671 19:27:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:38.671 19:27:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:38.671 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:38.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.930 --rc genhtml_branch_coverage=1 00:32:38.930 --rc genhtml_function_coverage=1 00:32:38.930 --rc genhtml_legend=1 00:32:38.930 --rc geninfo_all_blocks=1 00:32:38.930 --rc geninfo_unexecuted_blocks=1 00:32:38.930 00:32:38.930 ' 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:38.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.930 --rc genhtml_branch_coverage=1 00:32:38.930 --rc genhtml_function_coverage=1 00:32:38.930 --rc genhtml_legend=1 00:32:38.930 --rc geninfo_all_blocks=1 00:32:38.930 --rc geninfo_unexecuted_blocks=1 00:32:38.930 00:32:38.930 ' 00:32:38.930 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:38.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.930 --rc genhtml_branch_coverage=1 00:32:38.931 --rc genhtml_function_coverage=1 00:32:38.931 --rc genhtml_legend=1 00:32:38.931 --rc geninfo_all_blocks=1 00:32:38.931 --rc geninfo_unexecuted_blocks=1 00:32:38.931 00:32:38.931 ' 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:38.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.931 --rc genhtml_branch_coverage=1 00:32:38.931 --rc genhtml_function_coverage=1 00:32:38.931 --rc genhtml_legend=1 00:32:38.931 --rc geninfo_all_blocks=1 00:32:38.931 --rc geninfo_unexecuted_blocks=1 00:32:38.931 00:32:38.931 ' 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.931 19:27:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:38.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:38.931 19:27:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:47.071 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:47.071 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:47.071 19:27:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:47.071 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:47.072 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:47.072 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:47.072 19:27:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
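The xtrace above walks through get_ip_address one pipeline stage at a time. Collected into a single helper, the logic reconstructed from the trace is simply the sketch below (the real common.sh may differ in detail):

# get_ip_address as reconstructed from the trace: take the fourth field of
# `ip -o -4 addr show` (the CIDR address) and strip the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
get_ip_address mlx_0_1   # prints 192.168.100.9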
00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:47.072 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:47.072 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:47.072 altname enp217s0f0np0 00:32:47.072 altname ens818f0np0 00:32:47.072 inet 192.168.100.8/24 scope global mlx_0_0 00:32:47.072 valid_lft forever preferred_lft forever 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:47.072 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:47.072 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:47.072 altname enp217s0f1np1 00:32:47.072 altname ens818f1np1 00:32:47.072 inet 192.168.100.9/24 scope global mlx_0_1 00:32:47.072 valid_lft forever preferred_lft forever 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:47.072 192.168.100.9' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:47.072 192.168.100.9' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:47.072 192.168.100.9' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:47.072 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
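Just above, the script folds the two per-port addresses into the newline-separated RDMA_IP_LIST and then peels off the first and second target IPs with head/tail. The same steps, condensed from the trace:

# Splitting the address list into the two target IPs, exactly as the
# head/tail pipeline in the trace does:
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)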
00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=489346 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 489346 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 489346 ']' 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b2c18e4e001b38cf7b607cd6ece9265 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fdI 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b2c18e4e001b38cf7b607cd6ece9265 0 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b2c18e4e001b38cf7b607cd6ece9265 0 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9b2c18e4e001b38cf7b607cd6ece9265 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fdI 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fdI 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fdI 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f09af156d5139d071239c77aac529181eaefe7c617d551e98166b59dd0606a88 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ISj 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f09af156d5139d071239c77aac529181eaefe7c617d551e98166b59dd0606a88 3 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f09af156d5139d071239c77aac529181eaefe7c617d551e98166b59dd0606a88 3 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f09af156d5139d071239c77aac529181eaefe7c617d551e98166b59dd0606a88 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ISj 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ISj 00:32:47.073 19:27:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ISj 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=94da9da75f92f9aece20912cc79f7a7035cacec4b4c9bc3a 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4rS 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 94da9da75f92f9aece20912cc79f7a7035cacec4b4c9bc3a 0 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 94da9da75f92f9aece20912cc79f7a7035cacec4b4c9bc3a 0 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=94da9da75f92f9aece20912cc79f7a7035cacec4b4c9bc3a 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4rS 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4rS 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4rS 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fe054eda78eab2e3ea09c7eb7ab5f1da9377211613684434 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zi3 00:32:47.073 
19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fe054eda78eab2e3ea09c7eb7ab5f1da9377211613684434 2 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fe054eda78eab2e3ea09c7eb7ab5f1da9377211613684434 2 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fe054eda78eab2e3ea09c7eb7ab5f1da9377211613684434 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zi3 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zi3 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zi3 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7ee893ffe1ecd152ae88d06f2df37a50 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tbh 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7ee893ffe1ecd152ae88d06f2df37a50 1 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7ee893ffe1ecd152ae88d06f2df37a50 1 00:32:47.073 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7ee893ffe1ecd152ae88d06f2df37a50 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tbh 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tbh 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.tbh 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:47.074 19:27:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7d8ff73ba16f45812061b7eb0c2a4eb4 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Snj 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7d8ff73ba16f45812061b7eb0c2a4eb4 1 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7d8ff73ba16f45812061b7eb0c2a4eb4 1 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7d8ff73ba16f45812061b7eb0c2a4eb4 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:32:47.074 19:27:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Snj 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Snj 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Snj 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f78997b645d6c2a850deb8fd5079e7b3a31a34f81da57c91 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6IC 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f78997b645d6c2a850deb8fd5079e7b3a31a34f81da57c91 2 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
f78997b645d6c2a850deb8fd5079e7b3a31a34f81da57c91 2 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f78997b645d6c2a850deb8fd5079e7b3a31a34f81da57c91 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6IC 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6IC 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6IC 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2c15cd6ade13ad5d6ea24db113b4f848 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8sM 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2c15cd6ade13ad5d6ea24db113b4f848 0 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2c15cd6ade13ad5d6ea24db113b4f848 0 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2c15cd6ade13ad5d6ea24db113b4f848 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8sM 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8sM 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8sM 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:47.074 
19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ddce238d78fd02c897597fd03193136f7a34c11e9e348d177869c5b6e274a0b3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ni3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ddce238d78fd02c897597fd03193136f7a34c11e9e348d177869c5b6e274a0b3 3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ddce238d78fd02c897597fd03193136f7a34c11e9e348d177869c5b6e274a0b3 3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ddce238d78fd02c897597fd03193136f7a34c11e9e348d177869c5b6e274a0b3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ni3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ni3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Ni3 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 489346 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 489346 ']' 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
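The gen_dhchap_key/format_key traces above reduce to a small recipe: read len/2 random bytes, hex-encode them with xxd, and wrap the resulting ASCII hex string as a DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64 payload>:. The sketch below is a self-contained reconstruction of that recipe, not the verbatim nvmf/common.sh source; the payload layout (the ASCII key bytes followed by their CRC-32, least significant byte first, then base64) is inferred from the keys printed in this log and the standard DH-HMAC-CHAP secret representation, and the explicit python3 invocation is an assumption where the suite just runs "python -".

gen_dhchap_key_sketch() {
    # digest ids as in the digests map traced above:
    # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512; len = hex chars in the secret
    local digest_id=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$digest_id" "$key" <<'PYEOF'
import base64, sys, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # CRC-32 suffix, LSB first
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}
# e.g. gen_dhchap_key_sketch 1 32 prints a secret shaped like keys[2] above: DHHC-1:01:<base64>: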
00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fdI 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ISj ]] 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ISj 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.074 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4rS 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zi3 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zi3 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.tbh 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Snj ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Snj 00:32:47.334 19:27:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6IC 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8sM ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8sM 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Ni3 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:32:47.334 19:27:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:47.334 19:27:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:32:50.623 Waiting for block devices as requested 00:32:50.623 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:50.623 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:50.881 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:50.881 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:50.881 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:50.881 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:51.140 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:51.140 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:51.140 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:51.398 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:51.398 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:51.398 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:51.657 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:51.657 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:51.658 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:51.658 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:51.916 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:52.485 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:52.744 No valid GPT data, bailing 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:52.744 19:27:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:32:52.744 00:32:52.744 Discovery Log Number of Records 2, Generation counter 2 00:32:52.744 =====Discovery Log Entry 0====== 00:32:52.744 trtype: rdma 00:32:52.744 adrfam: ipv4 00:32:52.744 subtype: current discovery subsystem 00:32:52.744 treq: not specified, sq flow control disable supported 00:32:52.744 portid: 1 00:32:52.744 trsvcid: 4420 00:32:52.744 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:52.744 traddr: 192.168.100.8 00:32:52.744 eflags: none 00:32:52.744 rdma_prtype: not specified 00:32:52.744 rdma_qptype: connected 00:32:52.744 rdma_cms: rdma-cm 00:32:52.744 rdma_pkey: 0x0000 00:32:52.744 =====Discovery Log Entry 1====== 00:32:52.744 trtype: rdma 00:32:52.744 adrfam: ipv4 00:32:52.744 subtype: nvme subsystem 00:32:52.744 treq: not specified, sq flow control disable supported 00:32:52.744 portid: 1 00:32:52.744 trsvcid: 4420 00:32:52.744 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:52.744 traddr: 192.168.100.8 00:32:52.744 eflags: none 00:32:52.744 rdma_prtype: not specified 00:32:52.744 rdma_qptype: connected 00:32:52.744 rdma_cms: rdma-cm 00:32:52.744 rdma_pkey: 0x0000 00:32:52.744 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:52.744 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:52.744 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:52.744 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:52.744 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.744 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.744 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.745 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.745 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:32:52.745 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:32:52.745 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.745 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.004 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.263 nvme0n1 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.263 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.264 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.523 nvme0n1 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
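From here on, each connect_authenticate pass traced in this log repeats one host-side flow per digest/dhgroup/keyid combination. Condensed as a sketch, driving scripts/rpc.py directly (the suite's rpc_cmd wrapper issues the same RPCs against /var/tmp/spdk.sock); the controller name, key names, NQNs, and the 192.168.100.8 target address below are the ones this log uses:

RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# restrict the initiator to the combination under test, then connect with DH-HMAC-CHAP
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$RPC bdev_nvme_get_controllers          # success test: one controller named nvme0
$RPC bdev_nvme_detach_controller nvme0  # tear down before the next combination

Success is judged by bdev_nvme_get_controllers reporting nvme0 (the [[ nvme0 == \n\v\m\e\0 ]] checks in the trace), after which the controller is detached so the next pass starts clean.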
00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:53.523 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.524 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.783 nvme0n1 00:32:53.783 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.783 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.783 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.783 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.783 19:27:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.783 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.784 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.043 nvme0n1 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.043 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.044 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.303 nvme0n1 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.303 19:27:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.303 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.563 nvme0n1 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 
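[Note on the secrets and the target-side key setup traced above: the DHHC-1:NN:<base64>: strings follow the NVMe-oF DH-HMAC-CHAP secret representation, where the two-digit field ties the secret to a hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key plus a CRC-32. The echoes at host/auth.sh@48-51 configure the kernel nvmet target for the next connect; set -x does not print redirections, so the destinations are not visible in the trace. A minimal sketch, assuming the usual nvmet configfs layout (the $nvmet_host path below is an assumption, not shown in this log):

    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
    echo "hmac($digest)" > "$nvmet_host/dhchap_hash"     # auth.sh@48
    echo "$dhgroup"      > "$nvmet_host/dhchap_dhgroup"  # auth.sh@49
    echo "$key"          > "$nvmet_host/dhchap_key"      # auth.sh@50, host secret
    # auth.sh@51: a controller (bidirectional) secret is set only when a ckey exists
    [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
]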
00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:32:54.563 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.822 19:27:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.822 19:27:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.822 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.082 nvme0n1 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:55.082 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.340 19:27:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:55.340 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.341 nvme0n1 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.341 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
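[Note: each keyid iteration after the nvmet key setup is the same four-RPC round trip (connect_authenticate, host/auth.sh@104): restrict the host to one digest/dhgroup pair, attach with the matching keyring names, confirm the controller came up under the requested name, then detach. Collapsed out of the trace into a standalone sketch; rpc_cmd is the suite's wrapper, and invoking SPDK's scripts/rpc.py directly is an assumption here:

    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # success check, as at auth.sh@64: the controller must report the name we asked for
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0             # auth.sh@65

The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at auth.sh@58 explains why keyid 4 attaches without --dhchap-ctrlr-key in this log: its ckeys entry is empty, the array expands to nothing, and unidirectional authentication is exercised instead.]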
00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.600 19:27:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.859 nvme0n1 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:32:55.859 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.860 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.119 nvme0n1 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.119 19:27:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:56.119 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.120 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.379 nvme0n1 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:32:56.379 19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.379 
19:27:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.948 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.207 nvme0n1 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.207 
19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.207 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.775 nvme0n1 00:32:57.775 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.775 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:57.776 
19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.776 19:27:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.035 nvme0n1 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.035 19:27:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.035 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.036 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.294 nvme0n1 00:32:58.294 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.294 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.294 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.294 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.294 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.294 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.553 
19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.553 19:27:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.811 nvme0n1 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.811 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.812 19:27:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:00.187 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:00.188 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:00.188 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:00.188 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.188 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.188 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.446 nvme0n1 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
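[annotation] The trace above repeats one fixed pattern per key id; the host/auth.sh line markers (@103, @60, @61, @64, @65) make the loop body reconstructable. A minimal sketch of that pattern, assuming the test's own helpers rpc_cmd and nvmet_auth_set_key as they appear in the trace (everything else here is quoted from the visible commands, not invented):

# one iteration of the per-key test loop (host/auth.sh@102-@65, reconstructed)
for keyid in "${!keys[@]}"; do
  # 1. program the target-side key for this digest/dhgroup/keyid combination
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
  # 2. pin the host to a single digest and DH group so negotiation is deterministic
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # 3. connect with the matching host key (a ctrlr key is added when one exists; see below)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid"
  # 4. verify the authenticated controller came up, then tear it down for the next round
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
done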
00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.446 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.705 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.706 19:27:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.964 nvme0n1 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
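[annotation] Note the difference between key ids here: keyids 0-3 attach with both --dhchap-key and --dhchap-ctrlr-key, while keyid 4 has an empty ckey and attaches with --dhchap-key only. The switch is the array expansion quoted verbatim at host/auth.sh@58 above; the surrounding lines are a hedged reconstruction:

# ckey expands to the extra flag only when a controller key is configured
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
# keyid 4: ckeys[4] is empty, "${ckey[@]}" expands to nothing -> the target
# authenticates the host only (unidirectional DH-HMAC-CHAP).
# keyids 0-3: the flag is present -> the host also authenticates the target
# (bidirectional authentication).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key "key${keyid}" "${ckey[@]}"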
00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.964 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.222 19:27:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.222 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.481 nvme0n1 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.481 19:27:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.481 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.740 19:27:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.999 nvme0n1 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:01.999 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:02.000 19:27:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.000 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.568 nvme0n1 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.568 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 
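[annotation] The secrets in this trace follow the NVMe DH-HMAC-CHAP representation "DHHC-1:<t>:<base64>:". As far as can be read off the values above, <t> tracks the secret size (00 and 01 keys decode to 32-byte secrets, 02 to 48 bytes, 03 to 64 bytes), and the decoded blob appears to carry a 4-byte CRC32 appended to the raw secret, as nvme-cli's gen-dhchap-key produces. A small sketch to check that against a key taken from the trace (the length arithmetic is the assumption being tested, not a documented guarantee):

# decode one DHHC-1 secret from the trace and report the raw secret size
key='DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD:'
b64=${key#DHHC-1:??:}   # strip the "DHHC-1:NN:" prefix
b64=${b64%:}            # strip the trailing colon
len=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "secret bytes: $((len - 4)) (+ assumed 4-byte CRC32)"   # -> 32 for this 00 key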
00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.569 19:27:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.136 nvme0n1 00:33:03.136 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.136 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.136 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.136 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.136 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.136 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:03.395 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.396 19:27:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.963 nvme0n1 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:03.963 19:27:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.963 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.532 nvme0n1 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.532 
19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.532 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.791 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.792 19:27:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.360 nvme0n1 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.360 19:27:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.360 19:27:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.929 nvme0n1 00:33:05.929 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.929 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.929 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.929 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.929 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.929 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
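[annotation] At this point the trace rolls over from sha256 to sha384 and restarts at ffdhe2048: the @100/@101/@102 markers are three nested loops, so every digest is exercised against every FFDHE group and every key id. A reconstruction from the loop headers quoted in the trace (the array contents are inferred from the values visible in this portion; earlier iterations may cover additional groups such as ffdhe3072):

digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do            # host/auth.sh@100
  for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
    for keyid in "${!keys[@]}"; do           # host/auth.sh@102, keys = test's key table
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
    done
  done
done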
00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.188 nvme0n1 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.188 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.448 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.708 nvme0n1 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.708 19:27:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.708 19:27:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.968 nvme0n1 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.968 19:27:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.968 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.228 nvme0n1 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:07.228 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:07.229 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:07.229 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:07.229 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:07.229 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.229 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.229 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:33:07.488 nvme0n1 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.488 
19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.488 19:27:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.748 nvme0n1 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.748 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:07.749 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.008 nvme0n1 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.008 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.267 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.268 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.527 nvme0n1 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.527 19:27:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.527 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.786 nvme0n1 00:33:08.786 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.786 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.786 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.787 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.787 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.787 19:27:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.787 19:27:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.787 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.046 nvme0n1 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:09.046 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.047 19:27:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.047 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.306 nvme0n1 00:33:09.306 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.306 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.306 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.306 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.306 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.565 19:27:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:09.565 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:09.566 19:27:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.566 19:27:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.825 nvme0n1 00:33:09.825 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.825 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.825 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.825 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.825 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.825 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.825 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.826 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.085 nvme0n1 00:33:10.085 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.085 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.085 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.085 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.085 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.345 19:27:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.345 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.605 nvme0n1 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.605 19:27:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.866 nvme0n1 00:33:10.866 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.866 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.866 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.866 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.866 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.125 19:27:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.125 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.384 nvme0n1 00:33:11.384 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.384 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.384 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.384 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.384 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.384 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.643 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.644 19:27:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.903 nvme0n1 00:33:11.903 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.903 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.903 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.903 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.903 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.903 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.162 19:27:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:12.162 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:12.163 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.163 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.163 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.422 nvme0n1 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.422 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
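Each round in the trace above has the same shape: nvmet_auth_set_key programs the digest, DH group, and key pair for one keyid on the kernel nvmet target (the configfs paths it writes to are not visible here because bash xtrace does not print redirections), and connect_authenticate then configures the SPDK host side and performs an authenticated attach/detach. Reconstructed from the trace, one round reduces to roughly the sketch below, not the verbatim test script; rpc_cmd is the suite's JSON-RPC wrapper, and the key names key${keyid}/ckey${keyid} refer to keys registered with the target earlier in the run (that registration step is above this excerpt):

  # target side: install the key (and controller key, if any) for this keyid
  nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
  # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
  # pass --dhchap-ctrlr-key only when a controller key exists for this keyid
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
  # verify the authenticated controller came up, then tear it down for the next round
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0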
00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.681 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.682 19:27:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.941 nvme0n1 00:33:12.941 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.941 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.941 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.941 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.941 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.941 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.200 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.459 nvme0n1 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.459 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.719 19:27:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.287 nvme0n1 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.287 19:27:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:14.855 nvme0n1 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.856 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.115 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.684 nvme0n1 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:15.684 
19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.684 19:27:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.252 nvme0n1 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:16.252 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.512 19:27:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.081 nvme0n1 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:17.081 19:27:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.081 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.341 nvme0n1 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.341 19:27:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.341 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.601 nvme0n1 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.601 19:27:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:17.601 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
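The xtrace block above repeats for every (digest, dhgroup, keyid) combination under test. Stripped of trace noise, the driver loop in host/auth.sh that emits it has roughly the shape sketched below, reconstructed from the host/auth.sh@100-104 line references visible in the trace; the function names and arrays are the script's own, but the body is a sketch, not the verbatim source:

    # Shape of the auth test loop per the host/auth.sh@100-104 trace lines.
    # digests/dhgroups/keys are arrays defined earlier in the script
    # (this run exercises sha384 and sha512 against ffdhe2048..ffdhe8192).
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do             # key IDs 0..4 here
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done
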
00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.602 19:27:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.861 nvme0n1 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.861 19:27:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.861 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.120 nvme0n1 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:18.120 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:18.120 19:27:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.121 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:18.121 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.121 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.379 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.379 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.379 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:18.379 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.380 nvme0n1 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.380 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
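Every connect_authenticate invocation traced above (host/auth.sh@55-65) performs the same five steps: set the host-side DH-HMAC-CHAP digest and DH group, resolve the target IP, attach the controller with the key under test, verify the controller actually appeared, and detach. Below is a hedged reconstruction from the RPCs visible in the trace; the real helper may differ in details:

    # connect_authenticate as it reads through the trace (host/auth.sh@55-65).
    # The ${ckeys[keyid]:+...} expansion drops the controller-key argument
    # when no bidirectional key is configured (as with keyid 4 above).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # the controller must come up under the expected name, then detach
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
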
00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.639 19:27:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.639 nvme0n1 00:33:18.639 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.898 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.898 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.898 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.898 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.898 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.898 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.898 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.898 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.899 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.158 nvme0n1 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.158 19:27:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:19.158 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.159 19:27:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.159 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.418 nvme0n1 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 
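[editor's note] The xtrace output above and below repeats one pattern per (DH group, key index) pair. The following is a minimal bash sketch, reconstructed from the trace itself, of the loop host/auth.sh is exercising: program the target-side key (nvmet_auth_set_key, whose echo statements appear at auth.sh@48-51; their nvmet configfs destinations are not visible in this slice), restrict the host to one digest/DH-group pair, attach with the matching --dhchap-key, verify the controller came up, then detach. The keys/ckeys arrays and the exact dhgroups list are assumptions carried over from earlier in the run; error handling is omitted.

# Sketch only; all rpc_cmd invocations below appear verbatim in the trace.
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do          # assumed list; the trace shows these three
  for keyid in "${!keys[@]}"; do
    nvmet_auth_set_key sha512 "$dhgroup" "$keyid"          # target side: writes hmac(sha512), dhgroup, key, ckey
    rpc_cmd bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup" # host side: allow exactly this pair
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" \
      ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}   # ctrlr key only when a ckey exists (none for keyid 4)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next pair
  done
done

The `${ckeys[keyid]:+...}` expansion matches the ckey=() assignment at auth.sh@58 in the trace: when bidirectional authentication is not configured for a key index, the --dhchap-ctrlr-key argument is dropped entirely rather than passed empty.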
00:33:19.418 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:19.419 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.419 19:27:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.678 nvme0n1 00:33:19.678 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.678 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.678 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.678 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.678 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.678 19:27:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.678 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.678 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.678 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.678 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.938 nvme0n1 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.938 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.197 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.457 nvme0n1 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.457 19:27:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.717 nvme0n1 00:33:20.717 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.717 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.717 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.717 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.717 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.717 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.976 19:27:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:20.976 19:27:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.976 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.235 nvme0n1 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:21.235 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.236 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.495 nvme0n1 00:33:21.495 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.495 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.495 
19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.495 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.495 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.495 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.754 19:27:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.013 nvme0n1 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:22.013 19:27:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.013 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 nvme0n1 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.582 19:27:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.582 19:27:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.150 nvme0n1 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
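[editor's note] The nvmf/common.sh lines surrounding this point are the body of get_main_ns_ip, which every connect_authenticate call runs before attaching. A sketch of the helper as it behaves in this trace: it maps the active transport to the name of the environment variable holding the target address, then prints that variable's value (192.168.100.8 here). The ip_candidates mapping and checks mirror the trace; the $TEST_TRANSPORT variable name and the return-code handling are assumptions, since the full definition is not shown in this slice.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # pick the variable *name* for the active transport ("rdma" in this run)
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # indirect expansion resolves the name to its value, 192.168.100.8 here
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}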
00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.150 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.409 nvme0n1 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.409 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.668 19:27:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.928 nvme0n1 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:23.928 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:24.188 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:24.188 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.188 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.447 nvme0n1 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 
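Every secret cycling through these loops is an NVMe DH-HMAC-CHAP key in the DHHC-1:<t>:<base64>: form; the middle field records the transform applied to the secret (00 = cleartext, 01/02/03 = SHA-256/384/512), and by the usual nvme-cli convention the base64 payload is the raw secret followed by a 4-byte CRC-32, so a 32-byte cleartext secret decodes to 36 bytes. A throwaway helper to check that; illustrative only and not part of the test, with the CRC layout taken as an assumption from that convention:

# Illustrative only. Strips the DHHC-1:<transform>: prefix and the trailing
# colon, decodes the base64 payload, and prints its size in bytes
# (expected: secret length + 4-byte CRC-32, per the nvme-cli convention).
inspect_dhchap_key() {
        local b64=${1#DHHC-1:*:}   # drop shortest "DHHC-1:<t>:" prefix
        b64=${b64%:}               # drop the trailing ':'
        printf '%s' "$b64" | base64 -d | wc -c
}
inspect_dhchap_key 'DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD:'   # -> 36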
00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWIyYzE4ZTRlMDAxYjM4Y2Y3YjYwN2NkNmVjZTkyNjWqwKLD: 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: ]] 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjA5YWYxNTZkNTEzOWQwNzEyMzljNzdhYWM1MjkxODFlYWVmZTdjNjE3ZDU1MWU5ODE2NmI1OWRkMDYwNmE4OIYqa/E=: 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.447 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.707 19:27:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.275 nvme0n1 
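The bare nvme0n1 lines interleaved with the trace are the namespace surfacing on the host as each authenticated connection comes up. After that, every iteration runs the same verify-and-teardown before moving to the next keyid; condensed into standalone form below, using the same RPCs and jq filter as the trace (scripts/rpc.py client assumed):

# Verify the authenticated controller registered under the expected name,
# then detach it so the next keyid starts from a clean slate.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1
scripts/rpc.py bdev_nvme_detach_controller nvme0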
00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.275 19:27:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.275 19:27:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.842 nvme0n1 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:25.842 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
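The ip_candidates block repeated before every attach is the helper that resolves the target address for the active transport: an associative array maps each transport to the name of an environment variable, and indirect expansion turns that name into the address (rdma -> NVMF_FIRST_TARGET_IP -> 192.168.100.8 here). Reassembled from the trace; the function wrapper and the name of the transport variable are inferred, since the xtrace only shows expanded values:

# Reconstructed sketch of the get_main_ns_ip logic seen in the xtrace;
# $TEST_TRANSPORT is an assumed name for the variable holding "rdma".
get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z $ip ]] && return 1
        ip=${!ip}                  # indirect expansion, e.g. 192.168.100.8
        [[ -z $ip ]] && return 1
        echo "$ip"
}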
00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.843 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.779 nvme0n1 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Zjc4OTk3YjY0NWQ2YzJhODUwZGViOGZkNTA3OWU3YjNhMzFhMzRmODFkYTU3Yzkx8YGPhg==: 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmMxNWNkNmFkZTEzYWQ1ZDZlYTI0ZGIxMTNiNGY4NDgFG5N9: 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:26.780 19:28:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.780 19:28:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.348 nvme0n1 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
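Note the asymmetry in the keyid 4 iteration that follows: its controller key is empty in the trace (ckey= and [[ -z '' ]]), so the attach carries only --dhchap-key key4 and the host authenticates itself without challenging the controller back. Side by side, with the flags exactly as they appear in the trace:

# Bidirectional: host proves itself AND verifies the controller's response.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Unidirectional (keyid 4 here): no controller key, so the flag is omitted.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4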
00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRjZTIzOGQ3OGZkMDJjODk3NTk3ZmQwMzE5MzEzNmY3YTM0YzExZTllMzQ4ZDE3Nzg2OWM1YjZlMjc0YTBiM9snl+U=: 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.348 19:28:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.916 nvme0n1 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:27.916 
19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:27.916 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.176 request: 00:33:28.176 { 00:33:28.176 "name": "nvme0", 00:33:28.176 "trtype": "rdma", 00:33:28.176 "traddr": "192.168.100.8", 00:33:28.176 "adrfam": "ipv4", 00:33:28.176 "trsvcid": "4420", 00:33:28.176 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:33:28.176 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:28.176 "prchk_reftag": false, 00:33:28.176 "prchk_guard": false, 00:33:28.176 "hdgst": false, 00:33:28.176 "ddgst": false, 00:33:28.176 "allow_unrecognized_csi": false, 00:33:28.176 "method": "bdev_nvme_attach_controller", 00:33:28.176 "req_id": 1 00:33:28.176 } 00:33:28.176 Got JSON-RPC error response 00:33:28.176 response: 00:33:28.176 { 00:33:28.176 "code": -5, 00:33:28.176 "message": "Input/output error" 00:33:28.176 } 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:28.176 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.177 request: 00:33:28.177 { 00:33:28.177 "name": "nvme0", 00:33:28.177 "trtype": "rdma", 00:33:28.177 "traddr": "192.168.100.8", 00:33:28.177 "adrfam": "ipv4", 00:33:28.177 "trsvcid": "4420", 00:33:28.177 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:28.177 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:28.177 "prchk_reftag": false, 00:33:28.177 "prchk_guard": false, 00:33:28.177 "hdgst": false, 00:33:28.177 "ddgst": false, 00:33:28.177 "dhchap_key": "key2", 00:33:28.177 "allow_unrecognized_csi": false, 00:33:28.177 "method": "bdev_nvme_attach_controller", 00:33:28.177 "req_id": 1 00:33:28.177 } 00:33:28.177 Got JSON-RPC error response 00:33:28.177 response: 00:33:28.177 { 00:33:28.177 "code": -5, 00:33:28.177 "message": "Input/output error" 00:33:28.177 } 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:28.177 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.436 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.436 request: 00:33:28.436 { 00:33:28.437 "name": "nvme0", 00:33:28.437 "trtype": "rdma", 00:33:28.437 "traddr": "192.168.100.8", 00:33:28.437 "adrfam": "ipv4", 00:33:28.437 "trsvcid": "4420", 00:33:28.437 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:28.437 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:28.437 "prchk_reftag": false, 00:33:28.437 "prchk_guard": false, 00:33:28.437 "hdgst": false, 00:33:28.437 "ddgst": false, 00:33:28.437 "dhchap_key": "key1", 00:33:28.437 "dhchap_ctrlr_key": "ckey2", 00:33:28.437 "allow_unrecognized_csi": false, 00:33:28.437 "method": "bdev_nvme_attach_controller", 00:33:28.437 "req_id": 1 00:33:28.437 } 00:33:28.437 Got JSON-RPC error response 00:33:28.437 response: 00:33:28.437 { 00:33:28.437 "code": -5, 00:33:28.437 "message": "Input/output error" 00:33:28.437 } 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:33:28.437 19:28:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.437 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.696 nvme0n1 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.696 
19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:33:28.696 19:28:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.696 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.955 request: 00:33:28.955 { 00:33:28.955 "name": "nvme0", 00:33:28.955 "dhchap_key": "key1", 00:33:28.955 "dhchap_ctrlr_key": "ckey2", 00:33:28.955 "method": "bdev_nvme_set_keys", 00:33:28.955 "req_id": 1 00:33:28.955 } 00:33:28.955 Got JSON-RPC error response 00:33:28.955 response: 00:33:28.955 { 00:33:28.955 "code": -13, 00:33:28.955 "message": "Permission denied" 00:33:28.955 } 00:33:28.955 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:33:28.956 19:28:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:33:29.893 19:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:29.893 19:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:29.893 19:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.893 19:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.893 19:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.893 19:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:33:29.893 19:28:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:33:30.829 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:30.829 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:30.829 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.829 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRkYTlkYTc1ZjkyZjlhZWNlMjA5MTJjYzc5ZjdhNzAzNWNhY2VjNGI0YzliYzNhEMuZ2g==: 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: ]] 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmUwNTRlZGE3OGVhYjJlM2VhMDljN2ViN2FiNWYxZGE5Mzc3MjExNjEzNjg0NDM0h0gn7Q==: 00:33:31.088 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.089 nvme0n1 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2VlODkzZmZlMWVjZDE1MmFlODhkMDZmMmRmMzdhNTDnZSEf: 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: ]] 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Q4ZmY3M2JhMTZmNDU4MTIwNjFiN2ViMGMyYTRlYjRaPW9L: 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.089 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.348 request: 00:33:31.348 { 00:33:31.348 "name": "nvme0", 00:33:31.348 "dhchap_key": "key2", 00:33:31.348 "dhchap_ctrlr_key": "ckey1", 00:33:31.348 "method": "bdev_nvme_set_keys", 00:33:31.348 "req_id": 1 00:33:31.348 } 00:33:31.348 Got JSON-RPC error response 00:33:31.348 response: 00:33:31.348 { 00:33:31.348 "code": -13, 00:33:31.348 "message": "Permission denied" 00:33:31.348 } 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:33:31.348 19:28:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:33:32.285 19:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:32.285 19:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:32.285 19:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.285 19:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:32.285 19:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.285 19:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:33:32.285 19:28:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:33:33.664 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:33.664 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:33:33.665 
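Note: the trace above is SPDK's DHCHAP re-authentication check end to end. The target side was keyed through nvmet_auth_set_key, which echoes 'hmac(sha256)', the ffdhe2048 group, and the DHHC-1 secrets into the kernel nvmet configfs; the host then attaches with bdev_nvme_attach_controller, and each bdev_nvme_set_keys call that names a key pair the target is not configured for is expected to fail with JSON-RPC error -13 (the NOT wrapper asserts exactly that). A minimal host-side sketch of the same flow, with rpc.py standing in for the rpc_cmd wrapper; the method names, arguments, NQNs, and addresses are verbatim from the trace, while the polling loop is an assumption about what the sleep-and-recheck steps amount to:

  # Attach using the key pair the target currently accepts.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
      --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

  # Rotating to a pair the target does not expect is rejected with
  # {"code": -13, "message": "Permission denied"}, as logged above.
  ./scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1

  # The failed reauth tears the controller down; poll once per second
  # until bdev_nvme_get_controllers reports an empty list.
  while (( $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1
  done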
19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:33.665 rmmod nvme_rdma 00:33:33.665 rmmod nvme_fabrics 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 489346 ']' 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 489346 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 489346 ']' 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 489346 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489346 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489346' 00:33:33.665 killing process with pid 489346 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 489346 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 489346 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:33.665 19:28:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:33:33.665 19:28:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:33:37.859 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:37.859 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:39.235 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:33:39.493 19:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fdI /tmp/spdk.key-null.4rS /tmp/spdk.key-sha256.tbh /tmp/spdk.key-sha384.6IC /tmp/spdk.key-sha512.Ni3 /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:33:39.493 19:28:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:33:42.784 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:80:04.1 (8086 2021): Already using the 
vfio-pci driver 00:33:42.784 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:42.784 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:42.784 00:33:42.784 real 1m4.293s 00:33:42.784 user 0m57.432s 00:33:42.784 sys 0m16.398s 00:33:42.784 19:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.784 19:28:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.784 ************************************ 00:33:42.784 END TEST nvmf_auth_host 00:33:42.784 ************************************ 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.044 ************************************ 00:33:43.044 START TEST nvmf_bdevperf 00:33:43.044 ************************************ 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:33:43.044 * Looking for test storage... 
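Note: before the next suite starts, the cleanup phase above dismantles the kernel nvmet target that backed the auth test: configfs symlinks are removed before the directories that own them, leaf directories before their parents, and only then are nvmet_rdma/nvmet unloaded and the PCI devices rebound by setup.sh. The teardown order, condensed from the rm/rmdir calls in the trace (the bare 'echo 0' at nvmf/common.sh@714 resets an attribute whose path is not shown, so it is omitted here):

  rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_rdma nvmet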
00:33:43.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:33:43.044 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:43.304 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:43.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.305 --rc genhtml_branch_coverage=1 00:33:43.305 --rc genhtml_function_coverage=1 00:33:43.305 --rc genhtml_legend=1 00:33:43.305 --rc geninfo_all_blocks=1 00:33:43.305 --rc geninfo_unexecuted_blocks=1 00:33:43.305 00:33:43.305 ' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:43.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.305 --rc genhtml_branch_coverage=1 00:33:43.305 --rc genhtml_function_coverage=1 00:33:43.305 --rc genhtml_legend=1 00:33:43.305 --rc geninfo_all_blocks=1 00:33:43.305 --rc geninfo_unexecuted_blocks=1 00:33:43.305 00:33:43.305 ' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:43.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.305 --rc genhtml_branch_coverage=1 00:33:43.305 --rc genhtml_function_coverage=1 00:33:43.305 --rc genhtml_legend=1 00:33:43.305 --rc geninfo_all_blocks=1 00:33:43.305 --rc geninfo_unexecuted_blocks=1 00:33:43.305 00:33:43.305 ' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:43.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.305 --rc genhtml_branch_coverage=1 00:33:43.305 --rc genhtml_function_coverage=1 00:33:43.305 --rc genhtml_legend=1 00:33:43.305 --rc geninfo_all_blocks=1 00:33:43.305 --rc geninfo_unexecuted_blocks=1 00:33:43.305 00:33:43.305 ' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.305 19:28:17 
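Note: the scripts/common.sh walk above is the lcov version gate: lt 1.15 2 splits both version strings on '.', '-', and ':' and compares them component by component, so the 1.x lcov found here picks up the '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option set. A condensed sketch of that comparison, simplified to numeric components (the traced helper also regex-checks each part against ^[0-9]+$):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local -a ver1 ver2
      local v max
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          # Missing components count as 0; first difference decides.
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
      done
      [[ $2 == *'='* ]]   # all equal: only <=, >=, == succeed
  }
  lt 1.15 2 && echo "pre-2.0 lcov"    # prints: pre-2.0 lcov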
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:43.305 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:43.305 19:28:17 
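Note: sourcing test/nvmf/common.sh above pins the test topology (ports 4420-4422, the 192.168.100.x prefix, least address 8) and derives the host identity from the NVMe CLI; the 'line 33: [: : integer expression expected' message is that script tripping over an empty optional variable, and shows up on every run rather than indicating a failure of this one. The identity lines reduce to the following sketch (the HOSTID derivation is an assumption about how the traced value, the UUID tail of the NQN, is produced):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # -> 8013ee90-59d8-e711-906e-00163566263e here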
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:43.305 19:28:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.430 19:28:24 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.430 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:51.431 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:51.431 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:51.431 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:51.431 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:51.431 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:51.431 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:51.431 altname enp217s0f0np0 00:33:51.431 altname ens818f0np0 00:33:51.431 inet 192.168.100.8/24 scope global mlx_0_0 00:33:51.431 valid_lft forever preferred_lft forever 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:51.431 19:28:24 
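Note: the discovery pass above scans the known Intel/Mellanox PCI IDs, resolves each matching function to its net device through sysfs, and reads the IPv4 address off that interface; the mtu/link/altname dump is ordinary 'ip addr show' output for the interface it found. Condensed into a sketch, with the PCI address, interface, and address being this run's values:

  pci=0000:d9:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # -> .../net/mlx_0_0
  netdev=${pci_net_devs[0]##*/}
  ip -o -4 addr show "$netdev" | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8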
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:51.431 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:51.431 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:51.431 altname enp217s0f1np1 00:33:51.431 altname ens818f1np1 00:33:51.431 inet 192.168.100.9/24 scope global mlx_0_1 00:33:51.431 valid_lft forever preferred_lft forever 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:33:51.431 19:28:24 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:51.431 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:51.432 192.168.100.9' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:51.432 192.168.100.9' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:51.432 192.168.100.9' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=504870 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 504870 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 504870 ']' 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.432 [2024-12-13 19:28:24.759955] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:51.432 [2024-12-13 19:28:24.760017] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.432 [2024-12-13 19:28:24.853329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:51.432 [2024-12-13 19:28:24.875455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:51.432 [2024-12-13 19:28:24.875493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.432 [2024-12-13 19:28:24.875502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.432 [2024-12-13 19:28:24.875510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.432 [2024-12-13 19:28:24.875517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
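Note: nvmfappstart above launches the target with reactors on cores 1-3 (-m 0xE, matching the three reactor notices just below) and all tracepoint groups enabled (-e 0xFFFF), then waits for it to answer on /var/tmp/spdk.sock; just before that, the harness collapsed the discovered address list into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with head -n 1 and tail -n +2. A sketch of the start-and-wait pattern (binary path, flags, and socket are from the trace; the polling loop is an assumption about what waitforlisten does, using the real rpc_get_methods method as the probe):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done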
00:33:51.432 [2024-12-13 19:28:24.877093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:51.432 [2024-12-13 19:28:24.877181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.432 [2024-12-13 19:28:24.877182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:51.432 19:28:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.432 [2024-12-13 19:28:25.046988] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8f9c40/0x8fe0f0) succeed. 00:33:51.432 [2024-12-13 19:28:25.056197] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8fb1e0/0x93f790) succeed. 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.432 Malloc0 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
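Note: with the target up, the harness provisions it entirely over RPC: an RDMA transport with 1024 shared buffers and 8 KiB I/O units, a 64 MiB malloc bdev with 512-byte blocks, an allow-any-host subsystem, a namespace, and finally the RDMA listener on 192.168.100.8:4420 whose acknowledgment appears just below. The same sequence as direct rpc.py calls (rpc.py stands in for the rpc_cmd wrapper; every argument is verbatim from the trace):

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420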
00:33:51.432 [2024-12-13 19:28:25.200351] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:51.432 {
00:33:51.432 "params": {
00:33:51.432 "name": "Nvme$subsystem",
00:33:51.432 "trtype": "$TEST_TRANSPORT",
00:33:51.432 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:51.432 "adrfam": "ipv4",
00:33:51.432 "trsvcid": "$NVMF_PORT",
00:33:51.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:51.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:51.432 "hdgst": ${hdgst:-false},
00:33:51.432 "ddgst": ${ddgst:-false}
00:33:51.432 },
00:33:51.432 "method": "bdev_nvme_attach_controller"
00:33:51.432 }
00:33:51.432 EOF
00:33:51.432 )")
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:33:51.432 19:28:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:51.432 "params": {
00:33:51.432 "name": "Nvme1",
00:33:51.432 "trtype": "rdma",
00:33:51.432 "traddr": "192.168.100.8",
00:33:51.432 "adrfam": "ipv4",
00:33:51.432 "trsvcid": "4420",
00:33:51.432 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:51.432 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:51.432 "hdgst": false,
00:33:51.432 "ddgst": false
00:33:51.432 },
00:33:51.432 "method": "bdev_nvme_attach_controller"
00:33:51.432 }'
00:33:51.432 [2024-12-13 19:28:25.253437] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:33:51.432 [2024-12-13 19:28:25.253484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504943 ]
00:33:51.432 [2024-12-13 19:28:25.346291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:51.432 [2024-12-13 19:28:25.368705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:51.432 Running I/O for 1 seconds...
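The printf above shows only the attach-controller entry that gen_nvmf_target_json builds; what bdevperf actually reads on /dev/fd/62 is that entry wrapped in the usual SPDK JSON-config envelope (the subsystems/bdev/config nesting below is the standard layout, reconstructed here rather than printed by this trace):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }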
00:33:52.368 17923.00 IOPS, 70.01 MiB/s
00:33:52.368 Latency(us)
00:33:52.368 [2024-12-13T18:28:26.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:52.368 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:52.368 Verification LBA range: start 0x0 length 0x4000
00:33:52.368 Nvme1n1 : 1.01 17967.08 70.18 0.00 0.00 7082.65 1966.08 10695.48
00:33:52.368 [2024-12-13T18:28:26.746Z] ===================================================================================================================
00:33:52.368 [2024-12-13T18:28:26.746Z] Total : 17967.08 70.18 0.00 0.00 7082.65 1966.08 10695.48
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=505178
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:52.368 {
00:33:52.368 "params": {
00:33:52.368 "name": "Nvme$subsystem",
00:33:52.368 "trtype": "$TEST_TRANSPORT",
00:33:52.368 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:52.368 "adrfam": "ipv4",
00:33:52.368 "trsvcid": "$NVMF_PORT",
00:33:52.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:52.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:52.368 "hdgst": ${hdgst:-false},
00:33:52.368 "ddgst": ${ddgst:-false}
00:33:52.368 },
00:33:52.368 "method": "bdev_nvme_attach_controller"
00:33:52.368 }
00:33:52.368 EOF
00:33:52.368 )")
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:33:52.368 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:33:52.628 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:33:52.628 19:28:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:52.628 "params": {
00:33:52.628 "name": "Nvme1",
00:33:52.628 "trtype": "rdma",
00:33:52.628 "traddr": "192.168.100.8",
00:33:52.628 "adrfam": "ipv4",
00:33:52.628 "trsvcid": "4420",
00:33:52.628 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:52.628 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:52.628 "hdgst": false,
00:33:52.628 "ddgst": false
00:33:52.628 },
00:33:52.628 "method": "bdev_nvme_attach_controller"
00:33:52.628 }'
00:33:52.628 [2024-12-13 19:28:26.773981] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
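This second run is the failover exercise proper: bdevperf now verifies for 15 seconds (-t 15 -f) while the script, a few lines below, hard-kills the target out from under it. The choreography, paraphrased from the bdevperf.sh trace lines @29-@36 here (a sketch, not the script verbatim):

    build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3                # let verify I/O reach steady state
    kill -9 "$nvmfpid"     # hard-kill nvmf_tgt (pid 504870) mid-run
    sleep 3                # bdevperf is now retrying against a dead listener
    tgt_init               # restart the target and rebuild the subsystem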
00:33:52.628 [2024-12-13 19:28:26.774038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid505178 ] 00:33:52.628 [2024-12-13 19:28:26.865256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.628 [2024-12-13 19:28:26.885265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.887 Running I/O for 15 seconds... 00:33:54.760 17920.00 IOPS, 70.00 MiB/s [2024-12-13T18:28:30.074Z] 17984.00 IOPS, 70.25 MiB/s [2024-12-13T18:28:30.074Z] 19:28:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 504870 00:33:55.696 19:28:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:56.635 16042.67 IOPS, 62.67 MiB/s [2024-12-13T18:28:31.013Z] [2024-12-13 19:28:30.765956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.765994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766347] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 
sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.635 [2024-12-13 19:28:30.766630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.635 [2024-12-13 19:28:30.766638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766753] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 
[2024-12-13 19:28:30.766947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.766986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.766998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.636 [2024-12-13 19:28:30.767243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x181700 00:33:56.636 [2024-12-13 19:28:30.767265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x181700 00:33:56.636 [2024-12-13 19:28:30.767286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x181700 00:33:56.636 [2024-12-13 19:28:30.767305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x181700 00:33:56.636 [2024-12-13 19:28:30.767324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x181700 00:33:56.636 [2024-12-13 19:28:30.767343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x181700 00:33:56.636 [2024-12-13 19:28:30.767363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x181700 00:33:56.636 [2024-12-13 19:28:30.767383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.636 [2024-12-13 19:28:30.767394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x181700 00:33:56.636 [2024-12-13 19:28:30.767402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 
cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 
p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 
00:33:56.637 [2024-12-13 19:28:30.767865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.767988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.767997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.768008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.768017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.768028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.768037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 
[2024-12-13 19:28:30.768052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.768061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.768072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.768080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.768091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.768100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.768122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x181700 00:33:56.637 [2024-12-13 19:28:30.768130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.637 [2024-12-13 19:28:30.768140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 
19:28:30.768238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768410] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.768522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x181700 00:33:56.638 [2024-12-13 19:28:30.768532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:18862 cdw0:64e5e000 sqhd:7ae0 p:1 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.770452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.638 [2024-12-13 19:28:30.770491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.638 [2024-12-13 19:28:30.770521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123400 len:8 PRP1 0x0 PRP2 0x0 00:33:56.638 [2024-12-13 19:28:30.770552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.638 [2024-12-13 19:28:30.773670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:56.638 [2024-12-13 19:28:30.799261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:33:56.638 [2024-12-13 19:28:30.802839] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:33:56.638 [2024-12-13 19:28:30.802862] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:33:56.638 [2024-12-13 19:28:30.802870] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:33:57.465 12032.00 IOPS, 47.00 MiB/s [2024-12-13T18:28:31.843Z] [2024-12-13 19:28:31.807068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:33:57.465 [2024-12-13 19:28:31.807130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:57.465 [2024-12-13 19:28:31.807670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:57.465 [2024-12-13 19:28:31.807695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:57.465 [2024-12-13 19:28:31.807716] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:33:57.465 [2024-12-13 19:28:31.807739] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:57.465 [2024-12-13 19:28:31.814802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:57.465 [2024-12-13 19:28:31.819191] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:33:57.465 [2024-12-13 19:28:31.819249] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:33:57.465 [2024-12-13 19:28:31.819277] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:33:58.661 9625.60 IOPS, 37.60 MiB/s [2024-12-13T18:28:33.039Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 504870 Killed "${NVMF_APP[@]}" "$@"
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=506226
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 506226
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 506226 ']'
00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:58.661 19:28:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.661 [2024-12-13 19:28:32.798803] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:58.661 [2024-12-13 19:28:32.798849] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.661 [2024-12-13 19:28:32.823522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:58.661 [2024-12-13 19:28:32.823549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.661 [2024-12-13 19:28:32.823724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.661 [2024-12-13 19:28:32.823737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.661 [2024-12-13 19:28:32.823747] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:33:58.661 [2024-12-13 19:28:32.823760] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.661 [2024-12-13 19:28:32.826557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.661 [2024-12-13 19:28:32.829169] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:58.661 [2024-12-13 19:28:32.829190] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:58.661 [2024-12-13 19:28:32.829199] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:33:58.661 [2024-12-13 19:28:32.891385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:58.661 [2024-12-13 19:28:32.913044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:58.661 [2024-12-13 19:28:32.913080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:58.661 [2024-12-13 19:28:32.913089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:58.661 [2024-12-13 19:28:32.913097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:58.661 [2024-12-13 19:28:32.913104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
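The target restart above follows the harness's usual start-and-wait pattern: launch nvmf_tgt in the background, record its PID, and poll the UNIX-domain RPC socket until the app answers. A minimal bash sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and using the standard rpc_get_methods RPC as the liveness probe (waitforlisten's exact retry loop is an assumption here, not quoted from the harness):

  # start the target with the same flags as in the trace above, then wait for RPC
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.1    # retry until the reactor threads are up and listening
  done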
00:33:58.661 [2024-12-13 19:28:32.914435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:58.661 [2024-12-13 19:28:32.914548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.661 [2024-12-13 19:28:32.914550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:58.661 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.661 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:58.661 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:58.661 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:58.661 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.920 8021.33 IOPS, 31.33 MiB/s [2024-12-13T18:28:33.298Z] [2024-12-13 19:28:33.076537] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x127dc40/0x12820f0) succeed. 00:33:58.920 [2024-12-13 19:28:33.085623] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x127f1e0/0x12c3790) succeed. 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.920 Malloc0 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.920 19:28:33 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:58.920 [2024-12-13 19:28:33.231326] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.920 19:28:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 505178 00:33:59.488 [2024-12-13 19:28:33.833242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:33:59.488 [2024-12-13 19:28:33.833271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.488 [2024-12-13 19:28:33.833448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.488 [2024-12-13 19:28:33.833460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.488 [2024-12-13 19:28:33.833471] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:33:59.488 [2024-12-13 19:28:33.833483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.488 [2024-12-13 19:28:33.840602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.747 [2024-12-13 19:28:33.881548] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:34:01.125 7366.86 IOPS, 28.78 MiB/s [2024-12-13T18:28:36.439Z] 8712.12 IOPS, 34.03 MiB/s [2024-12-13T18:28:37.376Z] 9757.33 IOPS, 38.11 MiB/s [2024-12-13T18:28:38.312Z] 10595.20 IOPS, 41.39 MiB/s [2024-12-13T18:28:39.249Z] 11277.36 IOPS, 44.05 MiB/s [2024-12-13T18:28:40.187Z] 11850.00 IOPS, 46.29 MiB/s [2024-12-13T18:28:41.124Z] 12332.31 IOPS, 48.17 MiB/s [2024-12-13T18:28:42.508Z] 12745.79 IOPS, 49.79 MiB/s [2024-12-13T18:28:42.508Z] 13105.07 IOPS, 51.19 MiB/s 00:34:08.130 Latency(us) 00:34:08.130 [2024-12-13T18:28:42.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:08.130 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:08.130 Verification LBA range: start 0x0 length 0x4000 00:34:08.130 Nvme1n1 : 15.00 13106.77 51.20 10528.83 0.00 5396.25 347.34 1040187.39 00:34:08.130 [2024-12-13T18:28:42.508Z] =================================================================================================================== 00:34:08.130 [2024-12-13T18:28:42.508Z] Total : 13106.77 51.20 10528.83 0.00 5396.25 347.34 1040187.39 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 
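With the listener announced, the whole bdevperf target configuration can be read back out of the xtrace above. The same five-step setup, written as a sketch against SPDK's scripts/rpc.py client (the harness drives these through its rpc_cmd wrapper; the socket path is assumed to be the default):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB backing bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The "wait 505178" that follows is the harness waiting on the already-running bdevperf client, and the IOPS ramp after the successful controller reset is its recovery once the recreated listener comes back up.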
00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:08.130 rmmod nvme_rdma 00:34:08.130 rmmod nvme_fabrics 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 506226 ']' 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 506226 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 506226 ']' 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 506226 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 506226 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 506226' 00:34:08.130 killing process with pid 506226 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 506226 00:34:08.130 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 506226 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:08.389 00:34:08.389 real 0m25.410s 00:34:08.389 user 1m2.364s 00:34:08.389 sys 0m6.759s 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.389 ************************************ 00:34:08.389 END TEST nvmf_bdevperf 00:34:08.389 ************************************ 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # 
set +x 00:34:08.389 ************************************ 00:34:08.389 START TEST nvmf_target_disconnect 00:34:08.389 ************************************ 00:34:08.389 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:34:08.649 * Looking for test storage... 00:34:08.649 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.649 --rc genhtml_branch_coverage=1 00:34:08.649 --rc genhtml_function_coverage=1 00:34:08.649 --rc genhtml_legend=1 00:34:08.649 --rc geninfo_all_blocks=1 00:34:08.649 --rc geninfo_unexecuted_blocks=1 00:34:08.649 00:34:08.649 ' 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.649 --rc genhtml_branch_coverage=1 00:34:08.649 --rc genhtml_function_coverage=1 00:34:08.649 --rc genhtml_legend=1 00:34:08.649 --rc geninfo_all_blocks=1 00:34:08.649 --rc geninfo_unexecuted_blocks=1 00:34:08.649 00:34:08.649 ' 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.649 --rc genhtml_branch_coverage=1 00:34:08.649 --rc genhtml_function_coverage=1 00:34:08.649 --rc genhtml_legend=1 00:34:08.649 --rc geninfo_all_blocks=1 00:34:08.649 --rc geninfo_unexecuted_blocks=1 00:34:08.649 00:34:08.649 ' 00:34:08.649 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.649 --rc genhtml_branch_coverage=1 00:34:08.649 --rc genhtml_function_coverage=1 00:34:08.649 --rc genhtml_legend=1 00:34:08.649 --rc geninfo_all_blocks=1 00:34:08.649 --rc geninfo_unexecuted_blocks=1 00:34:08.649 00:34:08.650 ' 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:08.650 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.650 19:28:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.775 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:16.776 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:16.776 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:16.776 19:28:49 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:16.776 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:16.776 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
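The device scan above walks the PCI bus cache for the Mellanox IDs and then lists the net interfaces sysfs exposes under each matching function. A rough bash equivalent of that discovery step, assuming a stock sysfs layout (0x15b3 is the Mellanox vendor ID matched above; 0x1015, a ConnectX-4 Lx function, is the device ID found on both ports):

  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x15b3 ]] || continue
      echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
      ls "$pci/net" 2> /dev/null          # mlx_0_0 and mlx_0_1 in this run
  done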
00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:16.776 19:28:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:16.776 19:28:50 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:16.776 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:16.776 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:16.776 altname enp217s0f0np0 00:34:16.776 altname ens818f0np0 00:34:16.776 inet 192.168.100.8/24 scope global mlx_0_0 00:34:16.776 valid_lft forever preferred_lft forever 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:16.776 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:16.776 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:16.776 altname enp217s0f1np1 00:34:16.776 altname ens818f1np1 00:34:16.776 inet 192.168.100.9/24 scope global mlx_0_1 00:34:16.776 valid_lft forever preferred_lft forever 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
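The address harvesting just traced reduces to one small helper: take the first IPv4 address on an interface and strip the prefix length. Reconstructed from the xtrace (same ip/awk/cut pipeline, same field positions):

  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
  get_ip_address mlx_0_1    # -> 192.168.100.9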
00:34:16.776 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:16.777 192.168.100.9' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:16.777 192.168.100.9' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:16.777 192.168.100.9' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:16.777 ************************************ 00:34:16.777 START TEST nvmf_target_disconnect_tc1 00:34:16.777 ************************************ 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:34:16.777 19:28:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:16.777 [2024-12-13 19:28:50.395232] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:16.777 [2024-12-13 19:28:50.395316] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:16.777 [2024-12-13 19:28:50.395342] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:34:17.036 [2024-12-13 19:28:51.399383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:34:17.036 [2024-12-13 19:28:51.399410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:34:17.036 [2024-12-13 19:28:51.399422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:34:17.036 [2024-12-13 19:28:51.399447] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:17.036 [2024-12-13 19:28:51.399457] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:34:17.036 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:34:17.036 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:17.036 Initializing NVMe Controllers 00:34:17.036 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:34:17.036 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:17.036 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:17.036 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:17.036 00:34:17.036 real 0m1.161s 00:34:17.036 user 0m0.885s 00:34:17.036 sys 0m0.265s 00:34:17.036 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.036 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:17.037 ************************************ 00:34:17.037 END TEST nvmf_target_disconnect_tc1 00:34:17.037 ************************************ 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:17.296 ************************************ 00:34:17.296 START TEST nvmf_target_disconnect_tc2 00:34:17.296 ************************************ 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=511469 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 511469 00:34:17.296 19:28:51 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 511469 ']' 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.296 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.296 [2024-12-13 19:28:51.557169] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:17.296 [2024-12-13 19:28:51.557220] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.296 [2024-12-13 19:28:51.652337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.555 [2024-12-13 19:28:51.674747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.555 [2024-12-13 19:28:51.674784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.555 [2024-12-13 19:28:51.674794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.555 [2024-12-13 19:28:51.674803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.555 [2024-12-13 19:28:51.674810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
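The target is launched with core mask 0xF0, i.e. cores 4 through 7, which is why exactly four reactors check in on those cores in the next lines. A sketch of the equivalent manual startup, assuming a standard SPDK tree and the default RPC socket (the rpc_get_methods wait is illustrative glue, not the test script's own code):

    # start the NVMe-oF target on cores 4-7 with every tracepoint group enabled
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # block until the RPC socket answers before configuring the target
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null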
00:34:17.555 [2024-12-13 19:28:51.676629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:34:17.555 [2024-12-13 19:28:51.676741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:34:17.555 [2024-12-13 19:28:51.676849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:34:17.555 [2024-12-13 19:28:51.676850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.555 Malloc0 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.555 19:28:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.555 [2024-12-13 19:28:51.888429] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb5bc80/0xb67c80) succeed. 00:34:17.555 [2024-12-13 19:28:51.898076] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb5d2c0/0xba9320) succeed. 
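The two rpc_cmd calls above translate one-for-one into scripts/rpc.py invocations, and the pair of "Create IB device ... succeed" notices is the RDMA transport coming up on both mlx5 ports. As a sketch with the same parameters:

    # back the test subsystem with a 64 MiB, 512 B-block RAM disk, then create
    # the RDMA transport with 1024 shared buffers
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024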
00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.815 [2024-12-13 19:28:52.042182] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=511575 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:17.815 19:28:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:34:19.721 19:28:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
511469 00:34:19.721 19:28:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Write completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 Read completed with error (sct=0, sc=8) 00:34:21.098 starting I/O failed 00:34:21.098 [2024-12-13 19:28:55.250293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.036 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 511469 Killed "${NVMF_APP[@]}" "$@" 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 
0xF0 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=512165 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 512165 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 512165 ']' 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.036 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.036 [2024-12-13 19:28:56.127689] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:22.036 [2024-12-13 19:28:56.127740] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.036 [2024-12-13 19:28:56.209121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:22.036 [2024-12-13 19:28:56.231814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.036 [2024-12-13 19:28:56.231857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.036 [2024-12-13 19:28:56.231871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.036 [2024-12-13 19:28:56.231883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.036 [2024-12-13 19:28:56.231893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
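Here is the heart of tc2: the first target (pid 511469) was killed with SIGKILL while the reconnect example still had a full queue of I/O against it, which is what produced the burst of completions with sct=0, sc=8 above (generic status 0x08, commands aborted as the dead qpair's submission queue is torn down), and disconnect_init now brings up a replacement target (pid 512165) on the same address. A condensed sketch of that flow; the variable names are illustrative, the commands mirror the log:

    # run I/O against the live target, then yank the target out from under it
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"   # every in-flight command completes with an abort status
    sleep 2
    # restart the target with the same mask so the host has a peer to rejoin
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!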
00:34:22.036 [2024-12-13 19:28:56.234194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:34:22.036 [2024-12-13 19:28:56.234292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:34:22.036 [2024-12-13 19:28:56.234401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:34:22.036 [2024-12-13 19:28:56.234402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Write completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 Read completed with error (sct=0, sc=8) 00:34:22.036 starting I/O failed 00:34:22.036 [2024-12-13 19:28:56.255387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.036 [2024-12-13 19:28:56.257221] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:22.037 [2024-12-13 19:28:56.257247] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:22.037 [2024-12-13 19:28:56.257256] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.037 Malloc0 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.037 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.296 [2024-12-13 19:28:56.434774] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe2cc80/0xe38c80) succeed. 00:34:22.296 [2024-12-13 19:28:56.444563] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe2e2c0/0xe7a320) succeed. 
00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.296 [2024-12-13 19:28:56.587565] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.296 19:28:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 511575 00:34:23.235 [2024-12-13 19:28:57.261236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 
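The replacement target is configured with the same RPC sequence as the first, and the moment the listener is back on 192.168.100.8:4420 the host's retries start reaching it (the first "qpair failed" entry above). A sketch of that configuration step, mirroring the rpc_cmd calls recorded for both targets:

    # recreate the subsystem, attach the RAM disk, and listen on the address
    # the host is still retrying; the discovery subsystem gets a listener too
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420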
00:34:23.235 [2024-12-13 19:28:57.273731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.273786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.273807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.273823] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.273832] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.283877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-12-13 19:28:57.293568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.293616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.293634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.293644] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.293652] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.303642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-12-13 19:28:57.313556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.313598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.313615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.313624] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.313633] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.323813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 
00:34:23.235 [2024-12-13 19:28:57.333736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.333779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.333796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.333805] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.333813] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.343994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-12-13 19:28:57.353759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.353805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.353821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.353830] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.353839] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.363847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-12-13 19:28:57.373771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.373807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.373824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.373833] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.373842] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.383744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 
00:34:23.235 [2024-12-13 19:28:57.393814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.393856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.393873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.393882] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.393891] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.403904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-12-13 19:28:57.413904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.413945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.413962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.413971] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.413980] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.424065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-12-13 19:28:57.433934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.433979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.433996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.434005] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.434014] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.444192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 
00:34:23.235 [2024-12-13 19:28:57.453966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.454003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.454020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.454030] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.454038] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.464011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-12-13 19:28:57.474017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.474059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.474082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.474091] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.474100] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.484315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-12-13 19:28:57.494072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.494113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.494131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.235 [2024-12-13 19:28:57.494140] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.235 [2024-12-13 19:28:57.494148] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.235 [2024-12-13 19:28:57.504169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.235 qpair failed and we were unable to recover it. 
00:34:23.235 [2024-12-13 19:28:57.514156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.235 [2024-12-13 19:28:57.514201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.235 [2024-12-13 19:28:57.514218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.236 [2024-12-13 19:28:57.514227] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.236 [2024-12-13 19:28:57.514236] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.236 [2024-12-13 19:28:57.524336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-12-13 19:28:57.534264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.236 [2024-12-13 19:28:57.534310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.236 [2024-12-13 19:28:57.534327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.236 [2024-12-13 19:28:57.534336] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.236 [2024-12-13 19:28:57.534345] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.236 [2024-12-13 19:28:57.544416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-12-13 19:28:57.554298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.236 [2024-12-13 19:28:57.554340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.236 [2024-12-13 19:28:57.554357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.236 [2024-12-13 19:28:57.554370] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.236 [2024-12-13 19:28:57.554378] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.236 [2024-12-13 19:28:57.564514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.236 qpair failed and we were unable to recover it. 
00:34:23.236 [2024-12-13 19:28:57.574337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.236 [2024-12-13 19:28:57.574378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.236 [2024-12-13 19:28:57.574395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.236 [2024-12-13 19:28:57.574404] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.236 [2024-12-13 19:28:57.574412] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.236 [2024-12-13 19:28:57.584595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-12-13 19:28:57.594360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.236 [2024-12-13 19:28:57.594405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.236 [2024-12-13 19:28:57.594422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.236 [2024-12-13 19:28:57.594432] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.236 [2024-12-13 19:28:57.594440] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.236 [2024-12-13 19:28:57.604533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.496 [2024-12-13 19:28:57.614375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.496 [2024-12-13 19:28:57.614419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.496 [2024-12-13 19:28:57.614436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.496 [2024-12-13 19:28:57.614446] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.496 [2024-12-13 19:28:57.614455] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.496 [2024-12-13 19:28:57.624620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.496 qpair failed and we were unable to recover it. 
00:34:23.496 [2024-12-13 19:28:57.634458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.496 [2024-12-13 19:28:57.634499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.496 [2024-12-13 19:28:57.634517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.496 [2024-12-13 19:28:57.634526] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.496 [2024-12-13 19:28:57.634535] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.496 [2024-12-13 19:28:57.644680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.496 qpair failed and we were unable to recover it. 00:34:23.496 [2024-12-13 19:28:57.654567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.496 [2024-12-13 19:28:57.654608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.496 [2024-12-13 19:28:57.654625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.496 [2024-12-13 19:28:57.654634] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.496 [2024-12-13 19:28:57.654643] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.496 [2024-12-13 19:28:57.664729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.496 qpair failed and we were unable to recover it. 00:34:23.496 [2024-12-13 19:28:57.674561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.496 [2024-12-13 19:28:57.674603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.496 [2024-12-13 19:28:57.674620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.496 [2024-12-13 19:28:57.674629] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.496 [2024-12-13 19:28:57.674638] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.496 [2024-12-13 19:28:57.684721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.496 qpair failed and we were unable to recover it. 
00:34:23.496 [2024-12-13 19:28:57.694704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.496 [2024-12-13 19:28:57.694744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.496 [2024-12-13 19:28:57.694761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.496 [2024-12-13 19:28:57.694770] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.496 [2024-12-13 19:28:57.694779] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.496 [2024-12-13 19:28:57.704790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.496 qpair failed and we were unable to recover it. 00:34:23.496 [2024-12-13 19:28:57.714719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.496 [2024-12-13 19:28:57.714761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.496 [2024-12-13 19:28:57.714778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.496 [2024-12-13 19:28:57.714787] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.496 [2024-12-13 19:28:57.714796] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.496 [2024-12-13 19:28:57.724751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.496 qpair failed and we were unable to recover it. 00:34:23.496 [2024-12-13 19:28:57.734689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.496 [2024-12-13 19:28:57.734727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.496 [2024-12-13 19:28:57.734745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.496 [2024-12-13 19:28:57.734754] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.496 [2024-12-13 19:28:57.734763] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.496 [2024-12-13 19:28:57.744938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.496 qpair failed and we were unable to recover it. 
00:34:23.496 [2024-12-13 19:28:57.754747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.496 [2024-12-13 19:28:57.754787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.496 [2024-12-13 19:28:57.754804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.496 [2024-12-13 19:28:57.754813] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.496 [2024-12-13 19:28:57.754821] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.496 [2024-12-13 19:28:57.765033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.496 qpair failed and we were unable to recover it. 00:34:23.497 [2024-12-13 19:28:57.774666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.497 [2024-12-13 19:28:57.774709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.497 [2024-12-13 19:28:57.774726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.497 [2024-12-13 19:28:57.774735] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.497 [2024-12-13 19:28:57.774744] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.497 [2024-12-13 19:28:57.785163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.497 qpair failed and we were unable to recover it. 00:34:23.497 [2024-12-13 19:28:57.794843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.497 [2024-12-13 19:28:57.794885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.497 [2024-12-13 19:28:57.794902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.497 [2024-12-13 19:28:57.794911] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.497 [2024-12-13 19:28:57.794919] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.497 [2024-12-13 19:28:57.805065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.497 qpair failed and we were unable to recover it. 
00:34:23.497 [2024-12-13 19:28:57.814965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.497 [2024-12-13 19:28:57.815008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.497 [2024-12-13 19:28:57.815029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.497 [2024-12-13 19:28:57.815038] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.497 [2024-12-13 19:28:57.815053] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.497 [2024-12-13 19:28:57.825261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.497 qpair failed and we were unable to recover it. 00:34:23.497 [2024-12-13 19:28:57.834962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.497 [2024-12-13 19:28:57.834999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.497 [2024-12-13 19:28:57.835016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.497 [2024-12-13 19:28:57.835025] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.497 [2024-12-13 19:28:57.835033] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.497 [2024-12-13 19:28:57.844925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.497 qpair failed and we were unable to recover it. 00:34:23.497 [2024-12-13 19:28:57.854991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.497 [2024-12-13 19:28:57.855030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.497 [2024-12-13 19:28:57.855054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.497 [2024-12-13 19:28:57.855063] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.497 [2024-12-13 19:28:57.855071] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.497 [2024-12-13 19:28:57.865450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.497 qpair failed and we were unable to recover it. 
00:34:23.757 [2024-12-13 19:28:57.875087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:57.875124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:57.875141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:57.875150] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:57.875159] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:57.885417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 00:34:23.757 [2024-12-13 19:28:57.895122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:57.895164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:57.895180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:57.895194] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:57.895202] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:57.905376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 00:34:23.757 [2024-12-13 19:28:57.915236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:57.915278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:57.915295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:57.915304] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:57.915313] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:57.925545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 
00:34:23.757 [2024-12-13 19:28:57.935255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:57.935300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:57.935317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:57.935326] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:57.935335] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:57.945501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 00:34:23.757 [2024-12-13 19:28:57.955361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:57.955409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:57.955426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:57.955435] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:57.955443] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:57.965675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 00:34:23.757 [2024-12-13 19:28:57.975516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:57.975560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:57.975577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:57.975586] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:57.975594] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:57.985843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 
00:34:23.757 [2024-12-13 19:28:57.995498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:57.995543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:57.995560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:57.995569] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:57.995578] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:58.005795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 00:34:23.757 [2024-12-13 19:28:58.015563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:58.015600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:58.015617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:58.015626] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:58.015635] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:58.025929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 00:34:23.757 [2024-12-13 19:28:58.035598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.757 [2024-12-13 19:28:58.035641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.757 [2024-12-13 19:28:58.035658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.757 [2024-12-13 19:28:58.035667] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.757 [2024-12-13 19:28:58.035675] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.757 [2024-12-13 19:28:58.045955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.757 qpair failed and we were unable to recover it. 
00:34:23.757 [2024-12-13 19:28:58.055641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.758 [2024-12-13 19:28:58.055684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.758 [2024-12-13 19:28:58.055700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.758 [2024-12-13 19:28:58.055709] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.758 [2024-12-13 19:28:58.055718] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.758 [2024-12-13 19:28:58.065983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.758 qpair failed and we were unable to recover it. 00:34:23.758 [2024-12-13 19:28:58.075750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.758 [2024-12-13 19:28:58.075793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.758 [2024-12-13 19:28:58.075810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.758 [2024-12-13 19:28:58.075820] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.758 [2024-12-13 19:28:58.075828] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.758 [2024-12-13 19:28:58.086136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.758 qpair failed and we were unable to recover it. 00:34:23.758 [2024-12-13 19:28:58.095828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.758 [2024-12-13 19:28:58.095873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.758 [2024-12-13 19:28:58.095890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.758 [2024-12-13 19:28:58.095899] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.758 [2024-12-13 19:28:58.095908] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.758 [2024-12-13 19:28:58.106151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.758 qpair failed and we were unable to recover it. 
00:34:23.758 [2024-12-13 19:28:58.115869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:23.758 [2024-12-13 19:28:58.115908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:23.758 [2024-12-13 19:28:58.115925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:23.758 [2024-12-13 19:28:58.115933] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:23.758 [2024-12-13 19:28:58.115942] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:23.758 [2024-12-13 19:28:58.126247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:23.758 qpair failed and we were unable to recover it. 00:34:24.018 [2024-12-13 19:28:58.135948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.135991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.136007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.136016] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.136025] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.146264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-12-13 19:28:58.156013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.156058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.156078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.156087] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.156096] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.166213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 
00:34:24.018 [2024-12-13 19:28:58.176033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.176084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.176101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.176110] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.176119] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.186400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-12-13 19:28:58.196026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.196070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.196087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.196096] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.196105] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.206413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-12-13 19:28:58.216099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.216139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.216155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.216165] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.216173] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.226494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 
00:34:24.018 [2024-12-13 19:28:58.236313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.236357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.236375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.236384] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.236395] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.246482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-12-13 19:28:58.256318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.256360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.256376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.256385] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.256394] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.266421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-12-13 19:28:58.276308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.276351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.276369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.276378] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.276387] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.286796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 
00:34:24.018 [2024-12-13 19:28:58.296437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.296478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.296495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.296504] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.296513] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.306733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-12-13 19:28:58.316593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.316634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.018 [2024-12-13 19:28:58.316651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.018 [2024-12-13 19:28:58.316660] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.018 [2024-12-13 19:28:58.316668] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.018 [2024-12-13 19:28:58.326810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.018 qpair failed and we were unable to recover it. 00:34:24.018 [2024-12-13 19:28:58.336529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.018 [2024-12-13 19:28:58.336568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.019 [2024-12-13 19:28:58.336585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.019 [2024-12-13 19:28:58.336594] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.019 [2024-12-13 19:28:58.336602] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.019 [2024-12-13 19:28:58.346885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.019 qpair failed and we were unable to recover it. 
00:34:24.019 [2024-12-13 19:28:58.356643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.019 [2024-12-13 19:28:58.356686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.019 [2024-12-13 19:28:58.356703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.019 [2024-12-13 19:28:58.356712] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.019 [2024-12-13 19:28:58.356720] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.019 [2024-12-13 19:28:58.366949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.019 [2024-12-13 19:28:58.376587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.019 [2024-12-13 19:28:58.376628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.019 [2024-12-13 19:28:58.376645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.019 [2024-12-13 19:28:58.376654] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.019 [2024-12-13 19:28:58.376662] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.019 [2024-12-13 19:28:58.386971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.019 qpair failed and we were unable to recover it. 00:34:24.279 [2024-12-13 19:28:58.396744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.396793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.396810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.396819] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.396827] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.407103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 
00:34:24.279 [2024-12-13 19:28:58.416805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.416845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.416862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.416871] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.416879] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.427094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 00:34:24.279 [2024-12-13 19:28:58.436813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.436853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.436870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.436879] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.436887] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.447147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 00:34:24.279 [2024-12-13 19:28:58.456840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.456880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.456897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.456906] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.456914] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.467156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 
00:34:24.279 [2024-12-13 19:28:58.476973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.477018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.477035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.477048] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.477056] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.487292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 00:34:24.279 [2024-12-13 19:28:58.496990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.497027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.497051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.497060] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.497069] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.507385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 00:34:24.279 [2024-12-13 19:28:58.517017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.517064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.517080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.517089] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.517098] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.527483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 
00:34:24.279 [2024-12-13 19:28:58.537084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.537127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.537144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.537153] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.537162] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.547440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 00:34:24.279 [2024-12-13 19:28:58.557136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.557174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.557191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.557200] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.557208] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.567392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 00:34:24.279 [2024-12-13 19:28:58.577177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.577218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.279 [2024-12-13 19:28:58.577235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.279 [2024-12-13 19:28:58.577244] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.279 [2024-12-13 19:28:58.577256] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.279 [2024-12-13 19:28:58.587649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.279 qpair failed and we were unable to recover it. 
00:34:24.279 [2024-12-13 19:28:58.597321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.279 [2024-12-13 19:28:58.597356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.280 [2024-12-13 19:28:58.597373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.280 [2024-12-13 19:28:58.597382] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.280 [2024-12-13 19:28:58.597390] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.280 [2024-12-13 19:28:58.607650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.280 qpair failed and we were unable to recover it. 00:34:24.280 [2024-12-13 19:28:58.617408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.280 [2024-12-13 19:28:58.617447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.280 [2024-12-13 19:28:58.617464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.280 [2024-12-13 19:28:58.617473] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.280 [2024-12-13 19:28:58.617482] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.280 [2024-12-13 19:28:58.627697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.280 qpair failed and we were unable to recover it. 00:34:24.280 [2024-12-13 19:28:58.637432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.280 [2024-12-13 19:28:58.637477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.280 [2024-12-13 19:28:58.637494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.280 [2024-12-13 19:28:58.637503] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.280 [2024-12-13 19:28:58.637511] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.280 [2024-12-13 19:28:58.647646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.280 qpair failed and we were unable to recover it. 
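(Editor's note: the "Unknown controller ID 0x1" lines come from the target side — ctrlr.c:_nvmf_ctrlr_add_io_qpair — which resolves the CNTLID carried in the I/O-queue CONNECT data against the controllers the subsystem currently knows. A minimal sketch of that lookup pattern; all type and function names here are ours for illustration, not SPDK's.)

#include <stddef.h>
#include <stdint.h>

struct ctrlr { uint16_t cntlid; struct ctrlr *next; };
struct subsys { struct ctrlr *ctrlrs; };

/* Walk the subsystem's controller list; an unknown CNTLID means the
 * caller fails the CONNECT with sct 1 / sc 0x82, as seen in the log. */
static struct ctrlr *
find_ctrlr_by_cntlid(struct subsys *s, uint16_t cntlid)
{
	for (struct ctrlr *c = s->ctrlrs; c != NULL; c = c->next) {
		if (c->cntlid == cntlid) {
			return c;
		}
	}
	return NULL;
}

int
main(void)
{
	struct ctrlr c2 = { .cntlid = 0x2, .next = NULL };
	struct subsys s = { .ctrlrs = &c2 };

	/* CNTLID 0x1 is not tracked -> CONNECT is rejected, as above. */
	return find_ctrlr_by_cntlid(&s, 0x1) == NULL ? 0 : 1;
}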
00:34:24.540 [2024-12-13 19:28:58.657511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.540 [2024-12-13 19:28:58.657547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.540 [2024-12-13 19:28:58.657564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.540 [2024-12-13 19:28:58.657574] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.540 [2024-12-13 19:28:58.657582] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.540 [2024-12-13 19:28:58.667816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-12-13 19:28:58.677596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.540 [2024-12-13 19:28:58.677636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.540 [2024-12-13 19:28:58.677654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.540 [2024-12-13 19:28:58.677663] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.540 [2024-12-13 19:28:58.677671] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.540 [2024-12-13 19:28:58.687792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-12-13 19:28:58.697572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.540 [2024-12-13 19:28:58.697614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.540 [2024-12-13 19:28:58.697632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.540 [2024-12-13 19:28:58.697642] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.540 [2024-12-13 19:28:58.697651] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.540 [2024-12-13 19:28:58.707819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.540 qpair failed and we were unable to recover it. 
00:34:24.540 [2024-12-13 19:28:58.717743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.540 [2024-12-13 19:28:58.717780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.540 [2024-12-13 19:28:58.717797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.540 [2024-12-13 19:28:58.717806] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.540 [2024-12-13 19:28:58.717814] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.540 [2024-12-13 19:28:58.727996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-12-13 19:28:58.737726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.540 [2024-12-13 19:28:58.737764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.540 [2024-12-13 19:28:58.737781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.540 [2024-12-13 19:28:58.737790] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.540 [2024-12-13 19:28:58.737798] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.540 [2024-12-13 19:28:58.748007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-12-13 19:28:58.757776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.540 [2024-12-13 19:28:58.757819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.540 [2024-12-13 19:28:58.757835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.540 [2024-12-13 19:28:58.757844] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.540 [2024-12-13 19:28:58.757853] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.540 [2024-12-13 19:28:58.767956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.540 qpair failed and we were unable to recover it. 
00:34:24.540 [2024-12-13 19:28:58.777851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.540 [2024-12-13 19:28:58.777893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.540 [2024-12-13 19:28:58.777910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.540 [2024-12-13 19:28:58.777919] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.540 [2024-12-13 19:28:58.777927] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.540 [2024-12-13 19:28:58.788086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-12-13 19:28:58.797873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.540 [2024-12-13 19:28:58.797912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.540 [2024-12-13 19:28:58.797929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.540 [2024-12-13 19:28:58.797938] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.540 [2024-12-13 19:28:58.797947] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.540 [2024-12-13 19:28:58.808188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-12-13 19:28:58.817966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.541 [2024-12-13 19:28:58.818009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.541 [2024-12-13 19:28:58.818026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.541 [2024-12-13 19:28:58.818036] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.541 [2024-12-13 19:28:58.818049] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.541 [2024-12-13 19:28:58.827986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-12-13 19:28:58.838065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.541 [2024-12-13 19:28:58.838109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.541 [2024-12-13 19:28:58.838125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.541 [2024-12-13 19:28:58.838138] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.541 [2024-12-13 19:28:58.838146] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.541 [2024-12-13 19:28:58.848273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-12-13 19:28:58.858028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.541 [2024-12-13 19:28:58.858075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.541 [2024-12-13 19:28:58.858092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.541 [2024-12-13 19:28:58.858102] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.541 [2024-12-13 19:28:58.858111] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.541 [2024-12-13 19:28:58.868346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-12-13 19:28:58.878210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.541 [2024-12-13 19:28:58.878264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.541 [2024-12-13 19:28:58.878282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.541 [2024-12-13 19:28:58.878291] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.541 [2024-12-13 19:28:58.878300] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.541 [2024-12-13 19:28:58.888287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-12-13 19:28:58.898091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.541 [2024-12-13 19:28:58.898130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.541 [2024-12-13 19:28:58.898148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.541 [2024-12-13 19:28:58.898158] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.541 [2024-12-13 19:28:58.898166] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.541 [2024-12-13 19:28:58.908671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.801 [2024-12-13 19:28:58.918173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.801 [2024-12-13 19:28:58.918218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.801 [2024-12-13 19:28:58.918235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.801 [2024-12-13 19:28:58.918244] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.801 [2024-12-13 19:28:58.918260] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.801 [2024-12-13 19:28:58.928542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.801 qpair failed and we were unable to recover it. 00:34:24.801 [2024-12-13 19:28:58.938298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.801 [2024-12-13 19:28:58.938343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.801 [2024-12-13 19:28:58.938360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.801 [2024-12-13 19:28:58.938369] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.801 [2024-12-13 19:28:58.938378] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.801 [2024-12-13 19:28:58.948635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.801 qpair failed and we were unable to recover it. 
00:34:24.801 [2024-12-13 19:28:58.958426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.801 [2024-12-13 19:28:58.958471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.801 [2024-12-13 19:28:58.958487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.801 [2024-12-13 19:28:58.958496] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.801 [2024-12-13 19:28:58.958505] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.801 [2024-12-13 19:28:58.968715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.801 qpair failed and we were unable to recover it. 00:34:24.801 [2024-12-13 19:28:58.978404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.801 [2024-12-13 19:28:58.978442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.801 [2024-12-13 19:28:58.978459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.801 [2024-12-13 19:28:58.978468] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.801 [2024-12-13 19:28:58.978477] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.801 [2024-12-13 19:28:58.988732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.801 qpair failed and we were unable to recover it. 00:34:24.801 [2024-12-13 19:28:58.998514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.801 [2024-12-13 19:28:58.998555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.801 [2024-12-13 19:28:58.998572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.801 [2024-12-13 19:28:58.998581] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.801 [2024-12-13 19:28:58.998590] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:24.801 [2024-12-13 19:28:59.008856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:24.801 qpair failed and we were unable to recover it. 
00:34:26.106 [2024-12-13 19:29:00.342400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.106 [2024-12-13 19:29:00.342442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.106 [2024-12-13 19:29:00.342459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.106 [2024-12-13 19:29:00.342468] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.106 [2024-12-13 19:29:00.342477] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.106 [2024-12-13 19:29:00.352624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.106 qpair failed and we were unable to recover it. 00:34:26.106 [2024-12-13 19:29:00.362553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.106 [2024-12-13 19:29:00.362594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.106 [2024-12-13 19:29:00.362611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.106 [2024-12-13 19:29:00.362621] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.106 [2024-12-13 19:29:00.362629] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.106 [2024-12-13 19:29:00.372850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.106 qpair failed and we were unable to recover it. 00:34:26.106 [2024-12-13 19:29:00.382717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.106 [2024-12-13 19:29:00.382764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.106 [2024-12-13 19:29:00.382781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.106 [2024-12-13 19:29:00.382790] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.106 [2024-12-13 19:29:00.382798] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.106 [2024-12-13 19:29:00.392701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.106 qpair failed and we were unable to recover it. 
00:34:26.106 [2024-12-13 19:29:00.402585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.106 [2024-12-13 19:29:00.402626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.106 [2024-12-13 19:29:00.402643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.106 [2024-12-13 19:29:00.402652] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.106 [2024-12-13 19:29:00.402660] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.106 [2024-12-13 19:29:00.412882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.106 qpair failed and we were unable to recover it. 00:34:26.106 [2024-12-13 19:29:00.422720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.106 [2024-12-13 19:29:00.422764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.106 [2024-12-13 19:29:00.422780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.106 [2024-12-13 19:29:00.422789] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.106 [2024-12-13 19:29:00.422798] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.106 [2024-12-13 19:29:00.433050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.106 qpair failed and we were unable to recover it. 00:34:26.106 [2024-12-13 19:29:00.442768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.106 [2024-12-13 19:29:00.442810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.106 [2024-12-13 19:29:00.442827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.106 [2024-12-13 19:29:00.442836] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.106 [2024-12-13 19:29:00.442844] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.106 [2024-12-13 19:29:00.452972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.106 qpair failed and we were unable to recover it. 
00:34:26.106 [2024-12-13 19:29:00.462764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.106 [2024-12-13 19:29:00.462805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.106 [2024-12-13 19:29:00.462825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.106 [2024-12-13 19:29:00.462834] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.106 [2024-12-13 19:29:00.462843] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.106 [2024-12-13 19:29:00.472963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.106 qpair failed and we were unable to recover it. 00:34:26.366 [2024-12-13 19:29:00.482841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.482883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.366 [2024-12-13 19:29:00.482900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.366 [2024-12-13 19:29:00.482909] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.366 [2024-12-13 19:29:00.482918] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.366 [2024-12-13 19:29:00.493117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.366 qpair failed and we were unable to recover it. 00:34:26.366 [2024-12-13 19:29:00.502803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.502844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.366 [2024-12-13 19:29:00.502861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.366 [2024-12-13 19:29:00.502870] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.366 [2024-12-13 19:29:00.502878] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.366 [2024-12-13 19:29:00.513053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.366 qpair failed and we were unable to recover it. 
00:34:26.366 [2024-12-13 19:29:00.522734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.522776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.366 [2024-12-13 19:29:00.522793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.366 [2024-12-13 19:29:00.522802] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.366 [2024-12-13 19:29:00.522810] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.366 [2024-12-13 19:29:00.533165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.366 qpair failed and we were unable to recover it. 00:34:26.366 [2024-12-13 19:29:00.542957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.542997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.366 [2024-12-13 19:29:00.543014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.366 [2024-12-13 19:29:00.543023] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.366 [2024-12-13 19:29:00.543035] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.366 [2024-12-13 19:29:00.553254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.366 qpair failed and we were unable to recover it. 00:34:26.366 [2024-12-13 19:29:00.563017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.563068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.366 [2024-12-13 19:29:00.563085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.366 [2024-12-13 19:29:00.563094] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.366 [2024-12-13 19:29:00.563103] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.366 [2024-12-13 19:29:00.573175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.366 qpair failed and we were unable to recover it. 
00:34:26.366 [2024-12-13 19:29:00.583090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.583128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.366 [2024-12-13 19:29:00.583144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.366 [2024-12-13 19:29:00.583153] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.366 [2024-12-13 19:29:00.583161] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.366 [2024-12-13 19:29:00.593397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.366 qpair failed and we were unable to recover it. 00:34:26.366 [2024-12-13 19:29:00.603156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.603198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.366 [2024-12-13 19:29:00.603215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.366 [2024-12-13 19:29:00.603224] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.366 [2024-12-13 19:29:00.603232] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.366 [2024-12-13 19:29:00.613571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.366 qpair failed and we were unable to recover it. 00:34:26.366 [2024-12-13 19:29:00.623210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.623251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.366 [2024-12-13 19:29:00.623268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.366 [2024-12-13 19:29:00.623277] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.366 [2024-12-13 19:29:00.623285] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.366 [2024-12-13 19:29:00.633550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.366 qpair failed and we were unable to recover it. 
00:34:26.366 [2024-12-13 19:29:00.643280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.366 [2024-12-13 19:29:00.643318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.367 [2024-12-13 19:29:00.643335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.367 [2024-12-13 19:29:00.643344] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.367 [2024-12-13 19:29:00.643352] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.367 [2024-12-13 19:29:00.653548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.367 qpair failed and we were unable to recover it. 00:34:26.367 [2024-12-13 19:29:00.663367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.367 [2024-12-13 19:29:00.663410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.367 [2024-12-13 19:29:00.663427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.367 [2024-12-13 19:29:00.663436] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.367 [2024-12-13 19:29:00.663445] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.367 [2024-12-13 19:29:00.673563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.367 qpair failed and we were unable to recover it. 00:34:26.367 [2024-12-13 19:29:00.683471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.367 [2024-12-13 19:29:00.683513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.367 [2024-12-13 19:29:00.683530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.367 [2024-12-13 19:29:00.683539] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.367 [2024-12-13 19:29:00.683547] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.367 [2024-12-13 19:29:00.693668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.367 qpair failed and we were unable to recover it. 
00:34:26.367 [2024-12-13 19:29:00.703495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.367 [2024-12-13 19:29:00.703535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.367 [2024-12-13 19:29:00.703552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.367 [2024-12-13 19:29:00.703561] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.367 [2024-12-13 19:29:00.703569] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.367 [2024-12-13 19:29:00.713680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.367 qpair failed and we were unable to recover it. 00:34:26.367 [2024-12-13 19:29:00.723541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.367 [2024-12-13 19:29:00.723584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.367 [2024-12-13 19:29:00.723604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.367 [2024-12-13 19:29:00.723613] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.367 [2024-12-13 19:29:00.723621] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.367 [2024-12-13 19:29:00.733959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.367 qpair failed and we were unable to recover it. 00:34:26.627 [2024-12-13 19:29:00.743554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.743597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.743614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.743624] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.743632] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.753880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 
00:34:26.627 [2024-12-13 19:29:00.763706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.763750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.763768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.763777] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.763785] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.773751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 00:34:26.627 [2024-12-13 19:29:00.783803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.783846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.783863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.783872] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.783881] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.793959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 00:34:26.627 [2024-12-13 19:29:00.803809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.803856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.803872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.803885] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.803893] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.814066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 
00:34:26.627 [2024-12-13 19:29:00.823940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.823982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.823999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.824008] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.824016] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.834105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 00:34:26.627 [2024-12-13 19:29:00.844012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.844059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.844076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.844085] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.844094] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.854064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 00:34:26.627 [2024-12-13 19:29:00.863985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.864026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.864048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.864057] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.864066] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.874265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 
00:34:26.627 [2024-12-13 19:29:00.884057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.884103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.884120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.884129] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.884137] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.894187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 00:34:26.627 [2024-12-13 19:29:00.904108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.627 [2024-12-13 19:29:00.904151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.627 [2024-12-13 19:29:00.904168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.627 [2024-12-13 19:29:00.904177] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.627 [2024-12-13 19:29:00.904186] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.627 [2024-12-13 19:29:00.914148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.627 qpair failed and we were unable to recover it. 00:34:26.627 [2024-12-13 19:29:00.924061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.628 [2024-12-13 19:29:00.924099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.628 [2024-12-13 19:29:00.924115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.628 [2024-12-13 19:29:00.924124] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.628 [2024-12-13 19:29:00.924133] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.628 [2024-12-13 19:29:00.934440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.628 qpair failed and we were unable to recover it. 
00:34:26.628 [2024-12-13 19:29:00.944116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.628 [2024-12-13 19:29:00.944157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.628 [2024-12-13 19:29:00.944174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.628 [2024-12-13 19:29:00.944183] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.628 [2024-12-13 19:29:00.944191] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.628 [2024-12-13 19:29:00.954264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.628 qpair failed and we were unable to recover it. 00:34:26.628 [2024-12-13 19:29:00.964185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.628 [2024-12-13 19:29:00.964231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.628 [2024-12-13 19:29:00.964248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.628 [2024-12-13 19:29:00.964257] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.628 [2024-12-13 19:29:00.964265] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.628 [2024-12-13 19:29:00.974461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.628 qpair failed and we were unable to recover it. 00:34:26.628 [2024-12-13 19:29:00.984387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.628 [2024-12-13 19:29:00.984427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.628 [2024-12-13 19:29:00.984444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.628 [2024-12-13 19:29:00.984453] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.628 [2024-12-13 19:29:00.984461] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.628 [2024-12-13 19:29:00.994416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.628 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-12-13 19:29:01.004385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.004423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.004439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.004449] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.004457] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.014552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-12-13 19:29:01.024463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.024505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.024522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.024531] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.024539] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.034588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-12-13 19:29:01.044421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.044462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.044479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.044488] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.044496] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.054662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-12-13 19:29:01.064559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.064599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.064619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.064628] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.064636] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.074812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-12-13 19:29:01.084668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.084711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.084728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.084737] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.084745] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.094823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-12-13 19:29:01.104729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.104769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.104786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.104796] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.104804] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.114844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-12-13 19:29:01.124582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.124623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.124640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.124649] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.124657] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.135003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-12-13 19:29:01.144751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.144791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.144808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.144820] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.144828] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.154807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-12-13 19:29:01.164780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.164823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.164840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.164849] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.164858] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.175087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-12-13 19:29:01.184774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.184814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.184831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.184840] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.184848] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.195189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-12-13 19:29:01.204908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.204949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.204965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.204974] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.204983] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.888 [2024-12-13 19:29:01.215095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-12-13 19:29:01.224823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.888 [2024-12-13 19:29:01.224861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.888 [2024-12-13 19:29:01.224877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.888 [2024-12-13 19:29:01.224886] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.888 [2024-12-13 19:29:01.224895] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.889 [2024-12-13 19:29:01.235201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-12-13 19:29:01.245147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.889 [2024-12-13 19:29:01.245206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.889 [2024-12-13 19:29:01.245223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.889 [2024-12-13 19:29:01.245232] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.889 [2024-12-13 19:29:01.245240] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:26.889 [2024-12-13 19:29:01.255210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.889 qpair failed and we were unable to recover it. 00:34:27.148 [2024-12-13 19:29:01.265075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.148 [2024-12-13 19:29:01.265118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.148 [2024-12-13 19:29:01.265134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.148 [2024-12-13 19:29:01.265144] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.148 [2024-12-13 19:29:01.265152] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.148 [2024-12-13 19:29:01.275340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.148 qpair failed and we were unable to recover it. 00:34:27.148 [2024-12-13 19:29:01.285136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.148 [2024-12-13 19:29:01.285183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.148 [2024-12-13 19:29:01.285200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.148 [2024-12-13 19:29:01.285209] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.285218] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.295418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 
00:34:27.149 [2024-12-13 19:29:01.305202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.305245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.305262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.305271] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.305279] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.315491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 00:34:27.149 [2024-12-13 19:29:01.325238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.325275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.325291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.325300] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.325309] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.335399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 00:34:27.149 [2024-12-13 19:29:01.345298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.345339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.345355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.345364] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.345372] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.355566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 
00:34:27.149 [2024-12-13 19:29:01.365251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.365295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.365313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.365322] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.365331] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.375629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 00:34:27.149 [2024-12-13 19:29:01.385472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.385517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.385533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.385542] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.385551] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.395652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 00:34:27.149 [2024-12-13 19:29:01.405496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.405533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.405554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.405563] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.405571] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.415683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 
00:34:27.149 [2024-12-13 19:29:01.425435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.425475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.425492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.425501] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.425509] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.435697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 00:34:27.149 [2024-12-13 19:29:01.445427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.445474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.445490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.445500] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.445508] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.455824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 00:34:27.149 [2024-12-13 19:29:01.465714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.465756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.465773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.465782] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.465790] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.475974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 
00:34:27.149 [2024-12-13 19:29:01.485614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.485650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.485666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.485679] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.485687] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.495986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 00:34:27.149 [2024-12-13 19:29:01.505756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.149 [2024-12-13 19:29:01.505796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.149 [2024-12-13 19:29:01.505812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.149 [2024-12-13 19:29:01.505822] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.149 [2024-12-13 19:29:01.505830] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.149 [2024-12-13 19:29:01.516006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.149 qpair failed and we were unable to recover it. 00:34:27.409 [2024-12-13 19:29:01.525886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.409 [2024-12-13 19:29:01.525932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.409 [2024-12-13 19:29:01.525949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.409 [2024-12-13 19:29:01.525959] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.409 [2024-12-13 19:29:01.525968] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.409 [2024-12-13 19:29:01.536029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.409 qpair failed and we were unable to recover it. 
00:34:27.409 [2024-12-13 19:29:01.545856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.409 [2024-12-13 19:29:01.545898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.409 [2024-12-13 19:29:01.545915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.409 [2024-12-13 19:29:01.545924] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.409 [2024-12-13 19:29:01.545932] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.409 [2024-12-13 19:29:01.556140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.409 qpair failed and we were unable to recover it. 00:34:27.409 [2024-12-13 19:29:01.566005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.409 [2024-12-13 19:29:01.566044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.409 [2024-12-13 19:29:01.566061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.409 [2024-12-13 19:29:01.566070] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.409 [2024-12-13 19:29:01.566078] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.409 [2024-12-13 19:29:01.576308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.409 qpair failed and we were unable to recover it. 00:34:27.409 [2024-12-13 19:29:01.585963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.409 [2024-12-13 19:29:01.586005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.409 [2024-12-13 19:29:01.586022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.409 [2024-12-13 19:29:01.586031] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.409 [2024-12-13 19:29:01.586040] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.409 [2024-12-13 19:29:01.596264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.409 qpair failed and we were unable to recover it. 
00:34:27.409 [2024-12-13 19:29:01.606075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.409 [2024-12-13 19:29:01.606114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.409 [2024-12-13 19:29:01.606131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.409 [2024-12-13 19:29:01.606140] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.409 [2024-12-13 19:29:01.606148] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.409 [2024-12-13 19:29:01.616412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.409 qpair failed and we were unable to recover it. 00:34:27.409 [2024-12-13 19:29:01.626071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.409 [2024-12-13 19:29:01.626108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.409 [2024-12-13 19:29:01.626125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.409 [2024-12-13 19:29:01.626134] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.409 [2024-12-13 19:29:01.626143] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.409 [2024-12-13 19:29:01.636551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.409 qpair failed and we were unable to recover it. 00:34:27.409 [2024-12-13 19:29:01.646220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.409 [2024-12-13 19:29:01.646259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.409 [2024-12-13 19:29:01.646276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.409 [2024-12-13 19:29:01.646285] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.409 [2024-12-13 19:29:01.646294] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.410 [2024-12-13 19:29:01.656434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.410 qpair failed and we were unable to recover it. 
00:34:27.410 [2024-12-13 19:29:01.666220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.410 [2024-12-13 19:29:01.666261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.410 [2024-12-13 19:29:01.666278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.410 [2024-12-13 19:29:01.666287] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.410 [2024-12-13 19:29:01.666295] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.410 [2024-12-13 19:29:01.676575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.410 qpair failed and we were unable to recover it. 00:34:27.410 [2024-12-13 19:29:01.686327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.410 [2024-12-13 19:29:01.686366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.410 [2024-12-13 19:29:01.686382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.410 [2024-12-13 19:29:01.686391] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.410 [2024-12-13 19:29:01.686399] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.410 [2024-12-13 19:29:01.696588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.410 qpair failed and we were unable to recover it. 00:34:27.410 [2024-12-13 19:29:01.706355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.410 [2024-12-13 19:29:01.706393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.410 [2024-12-13 19:29:01.706410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.410 [2024-12-13 19:29:01.706419] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.410 [2024-12-13 19:29:01.706427] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.410 [2024-12-13 19:29:01.716636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.410 qpair failed and we were unable to recover it. 
00:34:27.410 [2024-12-13 19:29:01.726341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.410 [2024-12-13 19:29:01.726381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.410 [2024-12-13 19:29:01.726398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.410 [2024-12-13 19:29:01.726407] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.410 [2024-12-13 19:29:01.726416] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.410 [2024-12-13 19:29:01.736798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.410 qpair failed and we were unable to recover it. 00:34:27.410 [2024-12-13 19:29:01.746404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.410 [2024-12-13 19:29:01.746445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.410 [2024-12-13 19:29:01.746465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.410 [2024-12-13 19:29:01.746474] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.410 [2024-12-13 19:29:01.746483] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.410 [2024-12-13 19:29:01.756717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.410 qpair failed and we were unable to recover it. 00:34:27.410 [2024-12-13 19:29:01.766523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.410 [2024-12-13 19:29:01.766568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.410 [2024-12-13 19:29:01.766586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.410 [2024-12-13 19:29:01.766595] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.410 [2024-12-13 19:29:01.766603] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.410 [2024-12-13 19:29:01.776816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.410 qpair failed and we were unable to recover it. 
00:34:27.670 [2024-12-13 19:29:01.786549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.670 [2024-12-13 19:29:01.786595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.670 [2024-12-13 19:29:01.786612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.670 [2024-12-13 19:29:01.786621] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.670 [2024-12-13 19:29:01.786630] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.670 [2024-12-13 19:29:01.796983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.670 qpair failed and we were unable to recover it. 00:34:27.670 [2024-12-13 19:29:01.806663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.670 [2024-12-13 19:29:01.806701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.670 [2024-12-13 19:29:01.806719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.670 [2024-12-13 19:29:01.806728] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.670 [2024-12-13 19:29:01.806736] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.670 [2024-12-13 19:29:01.816834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.670 qpair failed and we were unable to recover it. 00:34:27.670 [2024-12-13 19:29:01.826740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.670 [2024-12-13 19:29:01.826780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.670 [2024-12-13 19:29:01.826796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.670 [2024-12-13 19:29:01.826805] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.670 [2024-12-13 19:29:01.826817] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.670 [2024-12-13 19:29:01.836993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.670 qpair failed and we were unable to recover it. 
00:34:27.670 [2024-12-13 19:29:01.846928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.670 [2024-12-13 19:29:01.846970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.670 [2024-12-13 19:29:01.846987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.670 [2024-12-13 19:29:01.846996] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.670 [2024-12-13 19:29:01.847005] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.670 [2024-12-13 19:29:01.857154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.670 qpair failed and we were unable to recover it. 00:34:27.670 [2024-12-13 19:29:01.866874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.670 [2024-12-13 19:29:01.866918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.670 [2024-12-13 19:29:01.866934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.670 [2024-12-13 19:29:01.866943] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.670 [2024-12-13 19:29:01.866951] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.670 [2024-12-13 19:29:01.877227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.670 qpair failed and we were unable to recover it. 00:34:27.670 [2024-12-13 19:29:01.886900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.670 [2024-12-13 19:29:01.886937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.670 [2024-12-13 19:29:01.886953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.670 [2024-12-13 19:29:01.886962] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.670 [2024-12-13 19:29:01.886970] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.670 [2024-12-13 19:29:01.897309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.670 qpair failed and we were unable to recover it. 
00:34:27.670 [2024-12-13 19:29:01.907025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.670 [2024-12-13 19:29:01.907069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.670 [2024-12-13 19:29:01.907087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.671 [2024-12-13 19:29:01.907095] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.671 [2024-12-13 19:29:01.907104] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.671 [2024-12-13 19:29:01.917191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.671 qpair failed and we were unable to recover it. 00:34:27.671 [2024-12-13 19:29:01.927109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.671 [2024-12-13 19:29:01.927153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.671 [2024-12-13 19:29:01.927170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.671 [2024-12-13 19:29:01.927179] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.671 [2024-12-13 19:29:01.927187] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.671 [2024-12-13 19:29:01.937196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.671 qpair failed and we were unable to recover it. 00:34:27.671 [2024-12-13 19:29:01.947098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.671 [2024-12-13 19:29:01.947143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.671 [2024-12-13 19:29:01.947160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.671 [2024-12-13 19:29:01.947169] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.671 [2024-12-13 19:29:01.947177] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.671 [2024-12-13 19:29:01.957259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.671 qpair failed and we were unable to recover it. 
00:34:27.671 [2024-12-13 19:29:01.967184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.671 [2024-12-13 19:29:01.967226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.671 [2024-12-13 19:29:01.967243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.671 [2024-12-13 19:29:01.967252] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.671 [2024-12-13 19:29:01.967261] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.671 [2024-12-13 19:29:01.977433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.671 qpair failed and we were unable to recover it. 00:34:27.671 [2024-12-13 19:29:01.987175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.671 [2024-12-13 19:29:01.987218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.671 [2024-12-13 19:29:01.987234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.671 [2024-12-13 19:29:01.987243] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.671 [2024-12-13 19:29:01.987252] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.671 [2024-12-13 19:29:01.997499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.671 qpair failed and we were unable to recover it. 00:34:27.671 [2024-12-13 19:29:02.007299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.671 [2024-12-13 19:29:02.007346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.671 [2024-12-13 19:29:02.007363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.671 [2024-12-13 19:29:02.007372] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.671 [2024-12-13 19:29:02.007380] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.671 [2024-12-13 19:29:02.017500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.671 qpair failed and we were unable to recover it. 
00:34:27.671 [2024-12-13 19:29:02.027439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.671 [2024-12-13 19:29:02.027483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.671 [2024-12-13 19:29:02.027500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.671 [2024-12-13 19:29:02.027509] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.671 [2024-12-13 19:29:02.027518] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.671 [2024-12-13 19:29:02.037724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.671 qpair failed and we were unable to recover it. 00:34:27.937 [2024-12-13 19:29:02.047317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.047356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.047373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.937 [2024-12-13 19:29:02.047382] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.937 [2024-12-13 19:29:02.047390] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.937 [2024-12-13 19:29:02.057698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.937 qpair failed and we were unable to recover it. 00:34:27.937 [2024-12-13 19:29:02.067519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.067559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.067575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.937 [2024-12-13 19:29:02.067584] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.937 [2024-12-13 19:29:02.067593] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.937 [2024-12-13 19:29:02.077711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.937 qpair failed and we were unable to recover it. 
00:34:27.937 [2024-12-13 19:29:02.087510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.087559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.087579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.937 [2024-12-13 19:29:02.087588] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.937 [2024-12-13 19:29:02.087596] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.937 [2024-12-13 19:29:02.097815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.937 qpair failed and we were unable to recover it. 00:34:27.937 [2024-12-13 19:29:02.107628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.107666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.107683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.937 [2024-12-13 19:29:02.107692] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.937 [2024-12-13 19:29:02.107700] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.937 [2024-12-13 19:29:02.117826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.937 qpair failed and we were unable to recover it. 00:34:27.937 [2024-12-13 19:29:02.127587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.127631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.127648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.937 [2024-12-13 19:29:02.127657] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.937 [2024-12-13 19:29:02.127665] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.937 [2024-12-13 19:29:02.137892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.937 qpair failed and we were unable to recover it. 
00:34:27.937 [2024-12-13 19:29:02.147668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.147709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.147725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.937 [2024-12-13 19:29:02.147734] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.937 [2024-12-13 19:29:02.147743] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.937 [2024-12-13 19:29:02.157990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.937 qpair failed and we were unable to recover it. 00:34:27.937 [2024-12-13 19:29:02.167785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.167831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.167848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.937 [2024-12-13 19:29:02.167857] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.937 [2024-12-13 19:29:02.167869] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.937 [2024-12-13 19:29:02.178069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.937 qpair failed and we were unable to recover it. 00:34:27.937 [2024-12-13 19:29:02.187682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.187720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.187737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.937 [2024-12-13 19:29:02.187746] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.937 [2024-12-13 19:29:02.187754] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.937 [2024-12-13 19:29:02.198132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.937 qpair failed and we were unable to recover it. 
00:34:27.937 [2024-12-13 19:29:02.207814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.937 [2024-12-13 19:29:02.207848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.937 [2024-12-13 19:29:02.207865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.938 [2024-12-13 19:29:02.207874] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.938 [2024-12-13 19:29:02.207883] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.938 [2024-12-13 19:29:02.218117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.938 qpair failed and we were unable to recover it. 00:34:27.938 [2024-12-13 19:29:02.227886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.938 [2024-12-13 19:29:02.227926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.938 [2024-12-13 19:29:02.227943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.938 [2024-12-13 19:29:02.227952] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.938 [2024-12-13 19:29:02.227961] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.938 [2024-12-13 19:29:02.238204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.938 qpair failed and we were unable to recover it. 00:34:27.938 [2024-12-13 19:29:02.247993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.938 [2024-12-13 19:29:02.248029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.938 [2024-12-13 19:29:02.248059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.938 [2024-12-13 19:29:02.248068] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.938 [2024-12-13 19:29:02.248077] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.938 [2024-12-13 19:29:02.258330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.938 qpair failed and we were unable to recover it. 
00:34:27.938 [2024-12-13 19:29:02.268109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.938 [2024-12-13 19:29:02.268150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.938 [2024-12-13 19:29:02.268167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.938 [2024-12-13 19:29:02.268176] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.938 [2024-12-13 19:29:02.268185] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.938 [2024-12-13 19:29:02.278193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.938 qpair failed and we were unable to recover it. 00:34:27.938 [2024-12-13 19:29:02.288069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.938 [2024-12-13 19:29:02.288118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.938 [2024-12-13 19:29:02.288141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.938 [2024-12-13 19:29:02.288150] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.938 [2024-12-13 19:29:02.288159] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:27.938 [2024-12-13 19:29:02.298309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.938 qpair failed and we were unable to recover it. 00:34:27.938 [2024-12-13 19:29:02.308387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.938 [2024-12-13 19:29:02.308428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.938 [2024-12-13 19:29:02.308445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.938 [2024-12-13 19:29:02.308454] nvme_rdma.c:1366:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.938 [2024-12-13 19:29:02.308463] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:28.281 [2024-12-13 19:29:02.318425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.281 qpair failed and we were unable to recover it. 00:34:28.281 [2024-12-13 19:29:02.318524] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:34:28.281 A controller has encountered a failure and is being reset. 
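Each retry above follows the same host-side pattern: the target rejects the fabric CONNECT (Unknown controller ID), polling the qpair then surfaces CQ transport error -6, and once a Keep Alive submission fails the driver escalates to a full controller reset. The following is a minimal sketch of that poll-and-recover loop against the public SPDK NVMe API — not the test code itself — assuming ctrlr was obtained from an earlier spdk_nvme_connect():

#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one I/O qpair; on an unrecoverable transport error, reset the
 * controller and hand back a freshly allocated qpair. */
static int
poll_with_recovery(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0 /* no limit */);

	if (rc < 0) {
		/* Corresponds to "CQ transport error -6 ... qpair failed and
		 * we were unable to recover it." in the log above. */
		fprintf(stderr, "qpair poll failed (%d), resetting controller\n", rc);
		spdk_nvme_ctrlr_free_io_qpair(*qpair);
		if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
			return -1;
		}
		*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
		return *qpair != NULL ? 0 : -1;
	}
	return 0;
}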
00:34:28.281 [2024-12-13 19:29:02.318667] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:28.281 [2024-12-13 19:29:02.320794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:34:28.281 Controller properly reset. 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Write completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 Read completed with error (sct=0, sc=8) 00:34:29.067 starting I/O failed 00:34:29.067 [2024-12-13 19:29:03.343575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error 
(sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Read completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 Write completed with error (sct=0, sc=8) 00:34:30.115 starting I/O failed 00:34:30.115 [2024-12-13 19:29:04.359087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:30.115 Initializing NVMe Controllers 00:34:30.115 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:30.115 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:30.115 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:30.115 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:30.115 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:30.115 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:30.115 Initialization complete. Launching workers. 
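The paired "completed with error (sct=0, sc=8) / starting I/O failed" lines above are the queue-depth-32 window of outstanding requests (the reconnect example runs with -q 32) being failed back as the qpair is torn down; sct 0 / sc 8 is the generic NVMe status "Command Aborted due to SQ Deletion". A hedged sketch of the submit/complete side, with ns, qpair, and buf as hypothetical handles set up elsewhere:

#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* Prints the same sct/sc pair seen in the log lines above. */
		fprintf(stderr, "completed with error (sct=%d, sc=%d)\n",
			cpl->status.sct, cpl->status.sc);
	}
}

/* Keep 32 reads in flight, mirroring the example's -q 32 queue depth. */
static void
submit_window(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	for (int i = 0; i < 32; i++) {
		spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* blocks */,
				      io_complete, NULL, 0 /* io_flags */);
	}
}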
00:34:30.115 Starting thread on core 1 00:34:30.115 Starting thread on core 2 00:34:30.115 Starting thread on core 3 00:34:30.115 Starting thread on core 0 00:34:30.115 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:30.115 00:34:30.115 real 0m12.921s 00:34:30.115 user 0m24.349s 00:34:30.115 sys 0m3.321s 00:34:30.115 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.115 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.115 ************************************ 00:34:30.115 END TEST nvmf_target_disconnect_tc2 00:34:30.115 ************************************ 00:34:30.115 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:34:30.115 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:34:30.115 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:30.115 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.115 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:30.375 ************************************ 00:34:30.375 START TEST nvmf_target_disconnect_tc3 00:34:30.375 ************************************ 00:34:30.375 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:34:30.375 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=513683 00:34:30.375 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:34:30.375 19:29:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:34:32.279 19:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 512165 00:34:32.279 19:29:06 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read 
completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Read completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 Write completed with error (sct=0, sc=8) 00:34:33.658 starting I/O failed 00:34:33.658 [2024-12-13 19:29:07.711469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:34:34.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 512165 Killed "${NVMF_APP[@]}" "$@" 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=514292 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 514292 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@835 -- # '[' -z 514292 ']' 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.227 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.227 [2024-12-13 19:29:08.572425] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:34.227 [2024-12-13 19:29:08.572480] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.487 [2024-12-13 19:29:08.664627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:34.487 [2024-12-13 19:29:08.685872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.487 [2024-12-13 19:29:08.685912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.487 [2024-12-13 19:29:08.685921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.487 [2024-12-13 19:29:08.685930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.487 [2024-12-13 19:29:08.685937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
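The replacement target was launched with -m 0xF0, a coremask that selects cores 4-7 — which is exactly what the four reactor lines that follow report. A minimal sketch, in plain bash, for expanding such a mask:

    # Expand a hex coremask (e.g. the 0xF0 passed to nvmf_tgt) into core numbers.
    mask=0xF0
    for core in $(seq 0 63); do
      (( (mask >> core) & 1 )) && echo "core $core"
    done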
00:34:34.487 [2024-12-13 19:29:08.687714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:34:34.487 [2024-12-13 19:29:08.687748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:34:34.487 [2024-12-13 19:29:08.687779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:34:34.487 [2024-12-13 19:29:08.687781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Read completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 Write completed with error (sct=0, sc=8) 00:34:34.487 starting I/O failed 00:34:34.487 [2024-12-13 19:29:08.716537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.487 19:29:08 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.487 Malloc0 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.487 19:29:08 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.746 [2024-12-13 19:29:08.885348] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13d6c80/0x13e2c80) succeed. 00:34:34.746 [2024-12-13 19:29:08.894813] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13d82c0/0x1424320) succeed. 
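With both mlx5 IB devices created, the harness has so far issued two RPCs (the Malloc0 bdev and the RDMA transport); the subsystem, namespace, and the listeners on the failover address 192.168.100.9 follow below. Outside the harness, the same target can be stood up by hand with scripts/rpc.py — a sketch mirroring this test's rpc_cmd calls, run against an nvmf_tgt listening on the default /var/tmp/spdk.sock:

    # Manual equivalent of this test's target setup (values taken from the log).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420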
00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.747 [2024-12-13 19:29:09.037545] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.747 19:29:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 513683 00:34:35.683 Read completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Read completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Read completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 
starting I/O failed 00:34:35.683 Read completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Read completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Read completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Read completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.683 Write completed with error (sct=0, sc=8) 00:34:35.683 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Write completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Write completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Write completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Read completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 Write completed with error (sct=0, sc=8) 00:34:35.684 starting I/O failed 00:34:35.684 [2024-12-13 19:29:09.721545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:35.684 [2024-12-13 19:29:09.723206] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:35.684 [2024-12-13 19:29:09.723229] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:35.684 [2024-12-13 19:29:09.723237] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:34:36.624 [2024-12-13 19:29:10.727198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:36.624 qpair failed and we were unable to recover it. 
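This reject/retry cycle repeats at roughly one attempt per second: each reconnect against the dead primary address receives RDMA_CM_EVENT_REJECTED (8) where ESTABLISHED was expected, surfaced as RDMA connect error -74 (-EBADMSG on Linux) before the next attempt. To count how many attempts each rqpair burned, the log itself is enough — a sketch, again assuming a hypothetical saved console.log:

    # Count reconnect attempts per rqpair pointer in the captured output.
    grep -o 'Failed to connect rqpair=0x[0-9a-f]*' console.log | sort | uniq -c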
00:34:36.624 [2024-12-13 19:29:10.728882] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:36.624 [2024-12-13 19:29:10.728903] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:36.624 [2024-12-13 19:29:10.728912] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:34:37.561 [2024-12-13 19:29:11.732772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.561 qpair failed and we were unable to recover it. 00:34:37.561 [2024-12-13 19:29:11.734256] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:37.561 [2024-12-13 19:29:11.734274] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:37.561 [2024-12-13 19:29:11.734283] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:34:38.499 [2024-12-13 19:29:12.738082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.499 qpair failed and we were unable to recover it. 00:34:38.499 [2024-12-13 19:29:12.739551] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:38.499 [2024-12-13 19:29:12.739569] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:38.499 [2024-12-13 19:29:12.739578] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:34:39.436 [2024-12-13 19:29:13.743339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.436 qpair failed and we were unable to recover it. 00:34:39.436 [2024-12-13 19:29:13.744710] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:39.436 [2024-12-13 19:29:13.744729] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:39.436 [2024-12-13 19:29:13.744741] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:34:40.373 [2024-12-13 19:29:14.748459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.373 qpair failed and we were unable to recover it. 
00:34:40.632 [2024-12-13 19:29:14.750038] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:40.632 [2024-12-13 19:29:14.750060] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:40.632 [2024-12-13 19:29:14.750068] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:34:41.568 [2024-12-13 19:29:15.753886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.568 qpair failed and we were unable to recover it. 00:34:41.568 [2024-12-13 19:29:15.755366] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:41.568 [2024-12-13 19:29:15.755386] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:41.568 [2024-12-13 19:29:15.755394] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:34:42.504 [2024-12-13 19:29:16.759298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.504 qpair failed and we were unable to recover it. 00:34:42.504 [2024-12-13 19:29:16.761422] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:42.504 [2024-12-13 19:29:16.761483] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:42.504 [2024-12-13 19:29:16.761512] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:43.440 [2024-12-13 19:29:17.765395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:34:43.440 qpair failed and we were unable to recover it. 00:34:43.440 [2024-12-13 19:29:17.766866] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:43.440 [2024-12-13 19:29:17.766884] nvme_rdma.c:1111:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:43.440 [2024-12-13 19:29:17.766892] nvme_rdma.c:3000:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:34:44.817 [2024-12-13 19:29:18.770640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:34:44.817 qpair failed and we were unable to recover it. 00:34:44.817 [2024-12-13 19:29:18.770763] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:34:44.817 A controller has encountered a failure and is being reset. 00:34:44.817 Resorting to new failover address 192.168.100.9 00:34:44.817 [2024-12-13 19:29:18.770857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
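After repeated rejected attempts the Keep Alive submission fails, the controller is marked failed, and — because reconnect was started with alt_traddr:192.168.100.9 — the host resorts to the failover address. When debugging a run like this, it is worth confirming the failover listener actually answers before suspecting the host-side state machine; a sketch using stock nvme-cli (assumed installed), with values from this log:

    # Probe the failover discovery service the target advertised above.
    nvme discover -t rdma -a 192.168.100.9 -s 4420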
00:34:44.817 [2024-12-13 19:29:18.770922] nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:34:44.817 [2024-12-13 19:29:18.772770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:34:44.817 Controller properly reset. 00:34:45.754 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Read completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 Write completed with error (sct=0, sc=8) 00:34:45.755 starting I/O failed 00:34:45.755 [2024-12-13 19:29:19.818746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:45.755 Initializing NVMe Controllers 00:34:45.755 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode1 00:34:45.755 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:34:45.755 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:45.755 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:45.755 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:45.755 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:45.755 Initialization complete. Launching workers. 00:34:45.755 Starting thread on core 1 00:34:45.755 Starting thread on core 2 00:34:45.755 Starting thread on core 3 00:34:45.755 Starting thread on core 0 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:34:45.755 00:34:45.755 real 0m15.370s 00:34:45.755 user 1m0.312s 00:34:45.755 sys 0m4.550s 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:45.755 ************************************ 00:34:45.755 END TEST nvmf_target_disconnect_tc3 00:34:45.755 ************************************ 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:45.755 rmmod nvme_rdma 00:34:45.755 rmmod nvme_fabrics 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 514292 ']' 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 514292 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 514292 ']' 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 514292 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.755 19:29:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 514292 00:34:45.755 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:34:45.755 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:34:45.755 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 514292' 00:34:45.755 killing process with pid 514292 00:34:45.755 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 514292 00:34:45.755 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 514292 00:34:46.015 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:46.015 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:46.015 00:34:46.015 real 0m37.571s 00:34:46.015 user 2m13.732s 00:34:46.015 sys 0m14.211s 00:34:46.015 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.015 19:29:20 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:46.015 ************************************ 00:34:46.015 END TEST nvmf_target_disconnect 00:34:46.015 ************************************ 00:34:46.015 19:29:20 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:46.015 00:34:46.015 real 7m25.390s 00:34:46.015 user 20m40.935s 00:34:46.015 sys 1m47.800s 00:34:46.015 19:29:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.015 19:29:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.015 ************************************ 00:34:46.015 END TEST nvmf_host 00:34:46.015 ************************************ 00:34:46.274 19:29:20 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:34:46.274 00:34:46.274 real 27m37.111s 00:34:46.274 user 79m24.685s 00:34:46.274 sys 6m53.814s 00:34:46.274 19:29:20 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.274 19:29:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:46.274 ************************************ 00:34:46.274 END TEST nvmf_rdma 00:34:46.274 ************************************ 00:34:46.274 19:29:20 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:34:46.274 19:29:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:46.275 19:29:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.275 19:29:20 -- common/autotest_common.sh@10 -- # set +x 00:34:46.275 ************************************ 00:34:46.275 START TEST spdkcli_nvmf_rdma 00:34:46.275 ************************************ 00:34:46.275 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:34:46.275 * Looking for test storage... 
00:34:46.275 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:34:46.275 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:46.275 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:34:46.275 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:46.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.535 --rc genhtml_branch_coverage=1 00:34:46.535 --rc genhtml_function_coverage=1 00:34:46.535 --rc genhtml_legend=1 00:34:46.535 --rc geninfo_all_blocks=1 00:34:46.535 --rc geninfo_unexecuted_blocks=1 00:34:46.535 00:34:46.535 ' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:46.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:34:46.535 --rc genhtml_branch_coverage=1 00:34:46.535 --rc genhtml_function_coverage=1 00:34:46.535 --rc genhtml_legend=1 00:34:46.535 --rc geninfo_all_blocks=1 00:34:46.535 --rc geninfo_unexecuted_blocks=1 00:34:46.535 00:34:46.535 ' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:46.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.535 --rc genhtml_branch_coverage=1 00:34:46.535 --rc genhtml_function_coverage=1 00:34:46.535 --rc genhtml_legend=1 00:34:46.535 --rc geninfo_all_blocks=1 00:34:46.535 --rc geninfo_unexecuted_blocks=1 00:34:46.535 00:34:46.535 ' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:46.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.535 --rc genhtml_branch_coverage=1 00:34:46.535 --rc genhtml_function_coverage=1 00:34:46.535 --rc genhtml_legend=1 00:34:46.535 --rc geninfo_all_blocks=1 00:34:46.535 --rc geninfo_unexecuted_blocks=1 00:34:46.535 00:34:46.535 ' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:46.535 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:46.535 19:29:20 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=516469 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 516469 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 516469 ']' 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.536 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:46.536 [2024-12-13 19:29:20.776561] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:46.536 [2024-12-13 19:29:20.776613] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516469 ] 00:34:46.536 [2024-12-13 19:29:20.867205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:46.536 [2024-12-13 19:29:20.891417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.536 [2024-12-13 19:29:20.891419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.795 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.795 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:34:46.795 19:29:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:46.795 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:46.795 19:29:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.795 
19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:34:46.795 19:29:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:34:54.916 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:54.917 19:29:27 
spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:54.917 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:54.917 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:54.917 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:54.917 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
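common.sh maps each Mellanox PCI function to its netdev by globbing /sys/bus/pci/devices/$pci/net/*, which is how the two 0x1015 ports above resolve to mlx_0_0 and mlx_0_1. The same discovery can be done by hand — a sketch, with 0x15b3 being the Mellanox vendor ID used throughout this script:

    # List Mellanox (vendor 0x15b3) PCI functions and their net interfaces.
    for pci in /sys/bus/pci/devices/*; do
      [ "$(cat "$pci/vendor")" = "0x15b3" ] || continue
      echo "$(basename "$pci"): $(ls "$pci/net" 2>/dev/null)"
    done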
00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:54.917 19:29:27 
spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:54.917 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:54.917 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:54.917 altname enp217s0f0np0 00:34:54.917 altname ens818f0np0 00:34:54.917 inet 192.168.100.8/24 scope global mlx_0_0 00:34:54.917 valid_lft forever preferred_lft forever 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:54.917 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:54.917 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:54.917 altname enp217s0f1np1 00:34:54.917 altname ens818f1np1 00:34:54.917 inet 192.168.100.9/24 scope global mlx_0_1 00:34:54.917 valid_lft forever preferred_lft forever 00:34:54.917 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:54.918 19:29:27 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:54.918 192.168.100.9' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:54.918 192.168.100.9' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:54.918 192.168.100.9' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:54.918 19:29:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:54.918 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:54.918 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:54.918 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:54.918 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:54.918 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:34:54.918 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:54.918 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:34:54.918 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:34:54.918 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:54.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:54.918 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:54.918 ' 00:34:56.823 [2024-12-13 19:29:30.839356] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc03330/0xad86c0) succeed. 00:34:56.823 [2024-12-13 19:29:30.849090] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbfe6d0/0xb19d60) succeed. 
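In the allocate_nic_ips trace above, get_ip_address resolves an interface name to its IPv4 address with an ip/awk/cut pipeline: `ip -o -4 addr show` prints one line per address, awk picks the CIDR field, and cut drops the prefix length. A self-contained sketch of the same pipeline; the interface name mlx_0_0 and the 192.168.100.x addresses come from this run.

#!/usr/bin/env bash
# Sketch of get_ip_address from nvmf/common.sh, as traced above.
# Field 4 of `ip -o -4 addr show` is the CIDR (e.g. 192.168.100.8/24);
# cut -d/ -f1 keeps only the address.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

ip=$(get_ip_address mlx_0_0)   # mlx_0_0 is the name seen in this log
[[ -z $ip ]] && { echo "no IPv4 address on mlx_0_0" >&2; exit 1; }
echo "mlx_0_0 -> $ip"          # prints 192.168.100.8 on this test bed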
00:34:58.201 [2024-12-13 19:29:32.247069] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:35:00.737 [2024-12-13 19:29:34.722850] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:35:02.642 [2024-12-13 19:29:36.889940] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:35:04.546 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:04.546 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:04.546 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:04.546 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:04.546 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:04.546 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:04.546 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:04.546 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:04.546 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:04.547 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:04.547 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:35:04.547 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:04.547 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:04.547 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:04.547 19:29:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:04.547 19:29:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.547 19:29:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:04.547 19:29:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:04.547 19:29:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.547 19:29:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:04.547 19:29:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:35:04.547 19:29:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:04.806 19:29:39 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:04.806 19:29:39 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:04.806 19:29:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:04.806 19:29:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.806 19:29:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:05.066 19:29:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:05.066 19:29:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.066 19:29:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:05.066 19:29:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:05.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:05.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:05.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:05.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:35:05.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:35:05.066 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:05.066 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:05.066 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:05.066 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:05.066 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:05.066 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:05.066 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:05.066 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:05.066 ' 00:35:10.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:10.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:10.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:10.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:10.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:35:10.340 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:35:10.340 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:10.340 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:10.340 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:10.340 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:10.340 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:10.340 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:10.340 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:10.340 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 516469 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 516469 ']' 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 516469 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 516469 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 516469' 00:35:10.600 killing process with pid 516469 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 516469 00:35:10.600 19:29:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 516469 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:10.859 
19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:10.859 rmmod nvme_rdma 00:35:10.859 rmmod nvme_fabrics 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:35:10.859 00:35:10.859 real 0m24.703s 00:35:10.859 user 0m54.544s 00:35:10.859 sys 0m6.437s 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.859 19:29:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:10.859 ************************************ 00:35:10.859 END TEST spdkcli_nvmf_rdma 00:35:10.859 ************************************ 00:35:10.859 19:29:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:10.859 19:29:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:10.859 19:29:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:10.859 19:29:45 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:10.859 19:29:45 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:11.118 19:29:45 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:11.118 19:29:45 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:11.118 19:29:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:11.118 19:29:45 -- common/autotest_common.sh@10 -- # set +x 00:35:11.118 19:29:45 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:11.118 19:29:45 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:11.118 19:29:45 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:11.118 19:29:45 -- common/autotest_common.sh@10 -- # set +x 00:35:17.689 INFO: APP EXITING 00:35:17.689 INFO: killing all VMs 00:35:17.689 INFO: killing vhost app 00:35:17.689 INFO: EXIT DONE 00:35:20.979 Waiting for block devices as requested 00:35:20.979 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:20.979 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:20.979 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:20.979 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:21.238 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:21.238 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:21.238 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:21.497 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:21.497 
0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:21.497 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:21.757 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:21.757 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:21.757 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:22.016 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:22.016 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:22.016 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:22.275 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:35:26.469 Cleaning 00:35:26.469 Removing: /var/run/dpdk/spdk0/config 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:26.469 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:26.469 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:26.469 Removing: /var/run/dpdk/spdk1/config 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:26.469 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:26.469 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:26.469 Removing: /var/run/dpdk/spdk1/mp_socket 00:35:26.469 Removing: /var/run/dpdk/spdk2/config 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:26.469 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:26.469 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:26.469 Removing: /var/run/dpdk/spdk3/config 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:26.469 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:26.469 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:26.469 Removing: /var/run/dpdk/spdk4/config 00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 
00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:26.469 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:26.469 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:26.469 Removing: /dev/shm/bdevperf_trace.pid152585 00:35:26.469 Removing: /dev/shm/bdev_svc_trace.1 00:35:26.469 Removing: /dev/shm/nvmf_trace.0 00:35:26.469 Removing: /dev/shm/spdk_tgt_trace.pid107982 00:35:26.469 Removing: /var/run/dpdk/spdk0 00:35:26.469 Removing: /var/run/dpdk/spdk1 00:35:26.469 Removing: /var/run/dpdk/spdk2 00:35:26.469 Removing: /var/run/dpdk/spdk3 00:35:26.469 Removing: /var/run/dpdk/spdk4 00:35:26.469 Removing: /var/run/dpdk/spdk_pid104688 00:35:26.469 Removing: /var/run/dpdk/spdk_pid106226 00:35:26.469 Removing: /var/run/dpdk/spdk_pid107982 00:35:26.469 Removing: /var/run/dpdk/spdk_pid108503 00:35:26.469 Removing: /var/run/dpdk/spdk_pid109526 00:35:26.469 Removing: /var/run/dpdk/spdk_pid109739 00:35:26.469 Removing: /var/run/dpdk/spdk_pid110661 00:35:26.469 Removing: /var/run/dpdk/spdk_pid110801 00:35:26.469 Removing: /var/run/dpdk/spdk_pid111058 00:35:26.469 Removing: /var/run/dpdk/spdk_pid116295 00:35:26.469 Removing: /var/run/dpdk/spdk_pid117889 00:35:26.469 Removing: /var/run/dpdk/spdk_pid118209 00:35:26.469 Removing: /var/run/dpdk/spdk_pid118538 00:35:26.469 Removing: /var/run/dpdk/spdk_pid118847 00:35:26.469 Removing: /var/run/dpdk/spdk_pid118975 00:35:26.469 Removing: /var/run/dpdk/spdk_pid119241 00:35:26.469 Removing: /var/run/dpdk/spdk_pid119523 00:35:26.469 Removing: /var/run/dpdk/spdk_pid119852 00:35:26.469 Removing: /var/run/dpdk/spdk_pid120647 00:35:26.469 Removing: /var/run/dpdk/spdk_pid123815 00:35:26.469 Removing: /var/run/dpdk/spdk_pid124133 00:35:26.469 Removing: /var/run/dpdk/spdk_pid124257 00:35:26.469 Removing: /var/run/dpdk/spdk_pid124450 00:35:26.469 Removing: /var/run/dpdk/spdk_pid125018 00:35:26.469 Removing: /var/run/dpdk/spdk_pid125031 00:35:26.469 Removing: /var/run/dpdk/spdk_pid125603 00:35:26.469 Removing: /var/run/dpdk/spdk_pid125610 00:35:26.469 Removing: /var/run/dpdk/spdk_pid125903 00:35:26.469 Removing: /var/run/dpdk/spdk_pid125990 00:35:26.469 Removing: /var/run/dpdk/spdk_pid126220 00:35:26.469 Removing: /var/run/dpdk/spdk_pid126270 00:35:26.469 Removing: /var/run/dpdk/spdk_pid126861 00:35:26.469 Removing: /var/run/dpdk/spdk_pid127144 00:35:26.469 Removing: /var/run/dpdk/spdk_pid127476 00:35:26.469 Removing: /var/run/dpdk/spdk_pid131394 00:35:26.469 Removing: /var/run/dpdk/spdk_pid135660 00:35:26.469 Removing: /var/run/dpdk/spdk_pid146320 00:35:26.469 Removing: /var/run/dpdk/spdk_pid147131 00:35:26.469 Removing: /var/run/dpdk/spdk_pid152585 00:35:26.469 Removing: /var/run/dpdk/spdk_pid152836 00:35:26.469 Removing: /var/run/dpdk/spdk_pid157059 00:35:26.469 Removing: /var/run/dpdk/spdk_pid163176 00:35:26.469 Removing: /var/run/dpdk/spdk_pid165939 00:35:26.469 Removing: /var/run/dpdk/spdk_pid176011 00:35:26.469 Removing: /var/run/dpdk/spdk_pid201089 00:35:26.469 Removing: /var/run/dpdk/spdk_pid205676 00:35:26.469 Removing: /var/run/dpdk/spdk_pid301251 00:35:26.469 Removing: /var/run/dpdk/spdk_pid306495 00:35:26.469 Removing: 
/var/run/dpdk/spdk_pid312289 00:35:26.469 Removing: /var/run/dpdk/spdk_pid321176 00:35:26.469 Removing: /var/run/dpdk/spdk_pid353190 00:35:26.469 Removing: /var/run/dpdk/spdk_pid358221 00:35:26.469 Removing: /var/run/dpdk/spdk_pid400456 00:35:26.469 Removing: /var/run/dpdk/spdk_pid401305 00:35:26.469 Removing: /var/run/dpdk/spdk_pid402410 00:35:26.469 Removing: /var/run/dpdk/spdk_pid403488 00:35:26.469 Removing: /var/run/dpdk/spdk_pid408179 00:35:26.469 Removing: /var/run/dpdk/spdk_pid414523 00:35:26.469 Removing: /var/run/dpdk/spdk_pid421496 00:35:26.469 Removing: /var/run/dpdk/spdk_pid422554 00:35:26.469 Removing: /var/run/dpdk/spdk_pid423354 00:35:26.469 Removing: /var/run/dpdk/spdk_pid424360 00:35:26.469 Removing: /var/run/dpdk/spdk_pid424685 00:35:26.469 Removing: /var/run/dpdk/spdk_pid429187 00:35:26.469 Removing: /var/run/dpdk/spdk_pid429196 00:35:26.469 Removing: /var/run/dpdk/spdk_pid433733 00:35:26.469 Removing: /var/run/dpdk/spdk_pid434267 00:35:26.469 Removing: /var/run/dpdk/spdk_pid434799 00:35:26.469 Removing: /var/run/dpdk/spdk_pid435587 00:35:26.469 Removing: /var/run/dpdk/spdk_pid435601 00:35:26.469 Removing: /var/run/dpdk/spdk_pid438023 00:35:26.469 Removing: /var/run/dpdk/spdk_pid439892 00:35:26.469 Removing: /var/run/dpdk/spdk_pid441777 00:35:26.469 Removing: /var/run/dpdk/spdk_pid444265 00:35:26.469 Removing: /var/run/dpdk/spdk_pid446150 00:35:26.469 Removing: /var/run/dpdk/spdk_pid448076 00:35:26.469 Removing: /var/run/dpdk/spdk_pid454202 00:35:26.469 Removing: /var/run/dpdk/spdk_pid454861 00:35:26.469 Removing: /var/run/dpdk/spdk_pid457137 00:35:26.469 Removing: /var/run/dpdk/spdk_pid458331 00:35:26.469 Removing: /var/run/dpdk/spdk_pid465347 00:35:26.469 Removing: /var/run/dpdk/spdk_pid468011 00:35:26.469 Removing: /var/run/dpdk/spdk_pid473477 00:35:26.469 Removing: /var/run/dpdk/spdk_pid484344 00:35:26.469 Removing: /var/run/dpdk/spdk_pid484351 00:35:26.469 Removing: /var/run/dpdk/spdk_pid504943 00:35:26.469 Removing: /var/run/dpdk/spdk_pid505178 00:35:26.469 Removing: /var/run/dpdk/spdk_pid511256 00:35:26.469 Removing: /var/run/dpdk/spdk_pid511575 00:35:26.469 Removing: /var/run/dpdk/spdk_pid513683 00:35:26.469 Removing: /var/run/dpdk/spdk_pid516469 00:35:26.469 Clean 00:35:26.469 19:30:00 -- common/autotest_common.sh@1453 -- # return 0 00:35:26.469 19:30:00 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:26.469 19:30:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.469 19:30:00 -- common/autotest_common.sh@10 -- # set +x 00:35:26.469 19:30:00 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:26.470 19:30:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.470 19:30:00 -- common/autotest_common.sh@10 -- # set +x 00:35:26.728 19:30:00 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:35:26.728 19:30:00 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:35:26.728 19:30:00 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:35:26.728 19:30:00 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:26.728 19:30:00 -- spdk/autotest.sh@398 -- # hostname 00:35:26.729 19:30:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:35:26.729 geninfo: WARNING: invalid characters removed from testname! 00:35:48.674 19:30:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:50.053 19:30:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:51.957 19:30:25 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:53.336 19:30:27 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:55.242 19:30:29 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:57.148 19:30:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:35:58.526 19:30:32 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:58.526 19:30:32 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:58.526 19:30:32 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:35:58.526 19:30:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:58.526 19:30:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:58.526 19:30:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:35:58.526 + [[ -n 7453 ]] 00:35:58.526 + sudo kill 7453 00:35:58.537 [Pipeline] } 00:35:58.552 [Pipeline] // stage 00:35:58.557 [Pipeline] } 00:35:58.571 [Pipeline] // timeout 00:35:58.577 [Pipeline] } 00:35:58.590 [Pipeline] // catchError 00:35:58.596 [Pipeline] } 00:35:58.610 [Pipeline] // wrap 00:35:58.617 [Pipeline] } 00:35:58.630 [Pipeline] // catchError 00:35:58.640 [Pipeline] stage 00:35:58.642 [Pipeline] { (Epilogue) 00:35:58.655 [Pipeline] catchError 00:35:58.656 [Pipeline] { 00:35:58.671 [Pipeline] echo 00:35:58.673 Cleanup processes 00:35:58.678 [Pipeline] sh 00:35:58.968 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:35:58.968 536552 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:35:58.982 [Pipeline] sh 00:35:59.271 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:35:59.271 ++ grep -v 'sudo pgrep' 00:35:59.271 ++ awk '{print $1}' 00:35:59.271 + sudo kill -9 00:35:59.271 + true 00:35:59.283 [Pipeline] sh 00:35:59.569 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:59.569 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:36:06.135 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:36:10.341 [Pipeline] sh 00:36:10.628 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:10.628 Artifacts sizes are good 00:36:10.642 [Pipeline] archiveArtifacts 00:36:10.650 Archiving artifacts 00:36:11.036 [Pipeline] sh 00:36:11.318 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:36:11.332 [Pipeline] cleanWs 00:36:11.341 [WS-CLEANUP] Deleting project workspace... 00:36:11.341 [WS-CLEANUP] Deferred wipeout is used... 00:36:11.348 [WS-CLEANUP] done 00:36:11.350 [Pipeline] } 00:36:11.365 [Pipeline] // catchError 00:36:11.376 [Pipeline] sh 00:36:11.660 + logger -p user.info -t JENKINS-CI 00:36:11.669 [Pipeline] } 00:36:11.682 [Pipeline] // stage 00:36:11.687 [Pipeline] } 00:36:11.700 [Pipeline] // node 00:36:11.705 [Pipeline] End of Pipeline 00:36:11.755 Finished: SUCCESS
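The coverage stage near the end of the log captures counters from the build tree after the tests ran, merges them with the pre-test baseline, and filters out code that is not SPDK's own. Condensed to its essentials, the sequence the autotest traced looks like the sketch below; workspace paths are shortened, the baseline file cov_base.info is the one named in the log, and several of the --rc flags from the full trace are omitted for brevity.

#!/usr/bin/env bash
# Condensed sketch of the lcov aggregation traced above
# (the autotest.sh@398..407 steps).
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

# 1. Capture counters from the instrumented build tree post-test.
$LCOV -c --no-external -d ./spdk -t "$(hostname)" -o cov_test.info
# 2. Merge the pre-test baseline with the post-test capture.
$LCOV -a cov_base.info -a cov_test.info -o cov_total.info
# 3. Drop external code (DPDK, /usr, bundled examples and tools).
$LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info
$LCOV -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info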